A hammer can hit a nail on the head, or it can hit you (or your enemy) on the head. Most, if not all, tools have multiple uses, some good and some bad. Societies adopt rules to promote the beneficial uses of technologies and discourage harmful uses. New tools/technologies necessitate a discussion of what the rules for their proper uses should be. We are now having that discussion for the uses of social media to promote and propagate ideas and information (some true and some false).
Free speech is revered in America for good reason. Like many other aspects of our preference for self-reliance (personal freedom), it requires that we take responsibility for sorting out what is true from what is false rather than handing that task over to government (and whoever leads it at the time). This can be a challenging task. We must sort out whom we trust to help us. Those of you my age will appreciate that we no longer have Walter Cronkite, or Huntley and Brinkley, to help us filter real from fake news.
Our commitment to free speech is so fundamental to the character of America that I have written about it a number of times. https://wcoats.blog/2012/09/14/american-values-and-foreign-policy/ https://wcoats.blog/2012/09/15/further-thoughts-on-free-speech/ https://wcoats.blog/2012/09/29/freedom-of-speech-final-thoughts-for-a-while-at-least/
Various social media platforms present us with another new tool and the need to sort out how best to use it. The answer(s) will take the form of social conventions and government regulations. It is important to get the balance right.
Facebook, Twitter, Google, YouTube, Instagram, TikTok and other platforms do not generate or provide content. They provide a very convenient and powerful means for you and me to share the content we produce. What responsibility should Mark Zuckerberg, Jack Dorsey, Larry Page, Sergey Brin, etc. have for regulating the content we post to their platforms, which are, after all, private? As you saw in my earlier blogs on this subject, publishing and broadcasting our words are limited when they endanger or slander others. But these limits do not and should not restrict our advocacy of policies and political beliefs, as I am doing now.
The big issue today is fake news (outright lies). If you create or repeat lies, you must be responsible for what you do (though we don’t generally punish lying unless it is under oath). You are allowed, for example, to state on Twitter or Facebook that you believe Obama was born in Kenya despite thorough documentation that he was born in Hawaii. Perhaps you are gullible enough to actually believe it though it is false. But should Facebook and other platforms have a responsibility to block clearly fake news? What if their own biases lead them to block more Democratic Party “fake news”, or vice versa?
As a private company Facebook can more or less do what it wants but it has a strong business/financial incentive to build a reputation of fairness and to provide a platform that attracts as many users as possible. Here are their rules from their website:
“To see the full list and learn more about our policies, please review the Facebook Community Standards. Here are a few of the things that aren’t allowed on Facebook:
- Nudity or other sexually suggestive content.
- Hate speech, credible threats or direct attacks on an individual or group.
- Content that contains self-harm or excessive violence.
- Fake or impostor profiles.”
The debate at the moment is focused on political ads. Facebook has said that it will not fact-check political ads, and Twitter has said that it will not run them at all. A Washington Post editorial stated the issue this way: “Politicians should, for the most part, be able to lie on Facebook, just as anyone else is, and the public should be able to hold leaders to account. But that’s a different question from whether politicians should be able to pay to have their lies spread, based on unprecedentedly precise behavioral data, to the voters who are most likely to believe their lies.” “Google’s reply has been more nuanced. The company will limit the criteria campaigns can use to “microtarget” ads to narrow audiences based on party affiliation or voter record. The aim is to increase accountability by letting more people see ads….” “Tech-firms-under-fire-on-political-ads”
No one, thank heavens, wants the government to vet ads for truthfulness. Some facts are obvious and some are less so. The potential danger to free speech is illustrated by Singapore’s “fake news” law. Singapore claimed that a post by fringe news site States Times Review (STR) contained ‘scurrilous accusations’. Giving in to the law, Facebook attached a note to the STR post that said it “is legally required to tell you that the Singapore government says this post has false information”. “Facebook’s addition was embedded at the bottom of the original post, which was not altered. It was only visible to social media users in Singapore.” https://www.bbc.com/news/world-asia-50613341
However, the government should provide the broad framework of a platform’s responsibilities. For example, the U.S. government requires transparency about who pays for print and TV ads. The same requirement should be imposed on Internet political ads. To qualify for Facebook’s say-whatever-you-want policy for political ads, the candidate being supported should be required to attach his/her name as approving the ad. Limiting the use of microtargeted ads broadens their exposure and thus the discipline on truth telling. According to The Economist: “To the extent that these moves make it harder for politicians to say contradictory things to different groups of voters without anybody noticing, they are welcome.” “Big-tech-changes-the-rules-for-political-adverts”
Knowing what sources of news to trust is no trivial matter, and knowing the source is helpful. Rather than fact-checking the content of posts, Facebook attaches an easily viewed statement of the source. Establishing standards for, and boundaries between, categories of posts sounds easier than it really is, but ensuring transparency about who has posted something should play an important role. Flagging questionable sources, without changing the content of a post, as Facebook does, is also helpful. I hope that the discussion of the best balance (and not every platform needs to adopt the same approach) will be constructive.