New tools require new rules?

A hammer can hit a nail on the head, or it can hit you (or your enemy) on the head. Most, if not all, tools have multiple uses, some good and some bad. Societies adopt rules to promote the beneficial uses of technologies and to discourage the harmful ones. Each new tool or technology necessitates a discussion of what the rules for its proper use should be. We are now having that discussion about the use of social media to promote and propagate ideas and information (some true and some false).

Free speech is revered in America for good reason. Like many other aspects of our preference for self-reliance (personal freedom), it requires that we take responsibility for sorting out what is true and what is false rather than handing that task over to government (and whoever leads it at the time). This can be a challenging task. We must sort out whom we trust to help us. Those of you my age will appreciate that we no longer have Walter Cronkite, or Huntley and Brinkley, to help us filter real news from fake.

Our commitment to free speech is so fundamental to the character of America that I have written about it a number of times:

https://wcoats.blog/2012/09/14/american-values-and-foreign-policy/
https://wcoats.blog/2012/09/15/further-thoughts-on-free-speech/
https://wcoats.blog/2012/09/29/freedom-of-speech-final-thoughts-for-a-while-at-least/

Various social media platforms present us with another new tool and the need to sort out how best to use it. The answer(s) will take the form of social conventions and government regulations. It is important to get the balance right.

Facebook, Twitter, Google, YouTube, Instagram, TikTok, and other platforms do not generate or provide content. They provide a very convenient and powerful means for you and me to share the content we produce. What responsibility should Mark Zuckerberg, Jack Dorsey, Larry Page, Sergey Brin, etc., have for regulating the content we post to their platforms, which are, after all, private? As you saw in my earlier blogs on this subject, publishing and broadcasting our words are limited when they endanger or slander others. But these limits do not and should not restrict our advocacy of policies and political beliefs, as I am doing now.

The big issue today is fake news (outright lies). If you create or repeat lies, you must be responsible for what you do (though we don’t generally punish lying unless it is under oath). You are allowed, for example, to state on Twitter or Facebook that you believe Obama was born in Kenya despite thorough documentation that he was born in Hawaii. Perhaps you are gullible enough to actually believe it, though it is false. But should Facebook and other platforms have a responsibility to block clearly fake news? And what if their own biases lead them to block more of the Democratic Party’s “fake news” than the Republicans’, or vice versa?

As a private company, Facebook can more or less do what it wants, but it has a strong business and financial incentive to build a reputation for fairness and to provide a platform that attracts as many users as possible. Here are its rules from its website:

“To see the full list and learn more about our policies, please review the Facebook Community Standards.  Here are a few of the things that aren’t allowed on Facebook:

  • Nudity or other sexually suggestive content.
  • Hate speech, credible threats or direct attacks on an individual or group.
  • Content that contains self-harm or excessive violence.
  • Fake or impostor profiles.
  • Spam.”

The debate at the moment is focused on political ads. Facebook has said that it will not fact-check political ads, and Twitter has said that it will not run them at all.  A Washington Post editorial stated the issue this way: “Politicians should, for the most part, be able to lie on Facebook, just as anyone else is, and the public should be able to hold leaders to account. But that’s a different question from whether politicians should be able to pay to have their lies spread, based on unprecedentedly precise behavioral data, to the voters who are most likely to believe their lies.”  “Google’s reply has been more nuanced. The company will limit the criteria campaigns can use to ‘microtarget’ ads to narrow audiences based on party affiliation or voter record. The aim is to increase accountability by letting more people see ads….”  “Tech-firms-under-fire-on-political-ads”

No one, thank heavens, wants the government to vet ads for truthfulness. Some facts are obvious and some are less so. The potential danger to free speech is illustrated by Singapore’s “fake news” law.  Singapore claimed that a post by the fringe news site States Times Review (STR) contained “scurrilous accusations.”  Complying with the law, Facebook attached a note to the STR post saying that it “is legally required to tell you that the Singapore government says this post has false information”.  “Facebook’s addition was embedded at the bottom of the original post, which was not altered. It was only visible to social media users in Singapore.” https://www.bbc.com/news/world-asia-50613341

However, the government should provide the broad framework of a platform’s responsibilities.  For example, the U.S. government requires transparency about who pays for print and TV ads. The same requirement should be imposed on Internet political ads. To qualify for Facebook’s say-whatever-you-want political ad policy, the candidate being supported should be required to attach his or her name as approving the ad. Limiting the use of microtargeted ads broadens their exposure and thus the discipline on truth-telling.  According to The Economist: “To the extent that these moves make it harder for politicians to say contradictory things to different groups of voters without anybody noticing, they are welcome.” “Big-tech-changes-the-rules-for-political-adverts”

Knowing which sources of news to trust is no trivial matter, and knowing the source of a post is helpful. Rather than fact-checking the content of posts, Facebook attaches an easily viewed statement of the source.  Establishing standards for, and boundaries between, categories of posts sounds easier than it really is, but ensuring transparency about who has posted something should play an important role. Flagging questionable sources without changing the content of a post, as Facebook does, is also helpful. I hope that the discussion of the best balance (and not every platform needs to adopt the same approach) will be constructive.

Alex Jones

Alex Jones and his Infowars website have been removed and banned from YouTube, Facebook, Apple, and Spotify, among the most popular social media platforms.  As of this moment, Twitter claims to be reviewing CNN’s claims that Jones and Infowars violate Twitter’s standards.  What should we think about this?

Jones has made many ridiculously false claims, such as that Sept. 11 was an inside job, that the Sandy Hook massacre never happened, and that Michelle Obama is a transgender person with male genitalia.  “An InfoWars video posted in July 2018 falsely declared that the ‘CIA admits transgenderism is a plot to depopulate humanity.’” Twitter-Infowars-Alex Jones But accuracy and honesty haven’t been the criteria for banning posts, or President Trump’s Twitter account would have been closed long ago. Who is to decide whose lies can be tweeted and whose can’t?

Hate speech, which violates Twitter’s rules, is another matter, as is the promotion of violence.  Twitter’s rules state that it does “not tolerate” content “that degrades someone.”  President Trump violates this rule on a regular basis as well.

What should we do about the lies and hate that are regularly posted on the Internet?  I agree with Kimberly Ross, who said: “It is imperative that we don’t view those like Alex Jones, who peddle in fear-mongering and lies, as harmless. In fact, we should actively call out such appalling behavior….  We should never wait around for the Left to come in and clean up our side.  We should do that ourselves.  Individuals like Jones who manufacture outrage and spread falsehoods should find that the market on the Right for their wares is minuscule.”  Dont-defend-Alex-Jones-but-dont-let-the-government-get-into-censorship-either

Several important policy issues arise from this.  We should ourselves challenge what we believe to be lies and hatred.  Our First Amendment protection of free speech rightly prevents the government from deciding what is true and what is hateful and banning it.  Few of us would be happy letting Stephen Miller, a nasty-minded White House adviser, determine what could be posted on Facebook about the American experience with immigrants.  Jonathan Rauch has updated his wonderful book Kindly Inquisitors: The New Attacks on Free Thought, in which he argues that the best defense against fake news and hateful speech is to exercise our own free speech to challenge it.  Kindly-Inquisitors-Attacks-Free-Thought. See also his short essay on this subject:  “Who-will-regulate-hate-speech”.

Facebook and Twitter are private companies and should be free to set whatever access policies they want.  On the other hand, they come close to being public utilities, like telephone companies and Internet access providers, which should not be allowed to block access for the Alex Joneses of the world because they lie and spread hate.  This deserves further thought.

Turning to government to protect us from every unpleasantry we might encounter weakens us and takes us in the wrong direction.  Those who defend shielding us from hate speech with “safe zones” and “trigger warnings” reflect a paternalistic attitude toward the responsibilities of our government and of ourselves as citizens of a free society.  Like well-meaning but ultimately harmful helicopter moms, we risk creating a society of wimps dependent on government for far more than is healthy for a free society.  Part of our training as we grow up and encounter a sometimes nasty world should be to stand up and challenge falsehood and hate when we meet them.  Safe zones deprive us of that training.  It is our job to counter lies and hate, not the government’s.