Speech Policies for Information Platforms Are Hard
Let the speech flow! Generally speaking, that should be the default policy that private information and communications companies adopt when devising speech policies for their platforms. Yet, it won’t always be the rule for a simple reason: When you serve a diverse audience, someone is always going to complain that you’re not doing enough to cater to their unique values, interests, or sensitivities.
This issue has renewed salience this week in the wake of the horrific attacks on U.S. consulates and embassies, which some blamed on an anti-Muslim video that was widely distributed globally on YouTube, which is owned by Google. As a result, notes CNet.com’s Declan McCullagh, “Google has found itself embroiled in a high-profile dispute pitting the traditional western value of free speech against Islam’s strict proscription against blasphemy.” The search provider decided to block the video in Libya and Egypt, although users elsewhere will still be able to see it (assuming Google and others don’t cave to indirect pressure from the White House to take it down entirely).
Every day, in countless ways both big and small, modern information intermediaries are confronted with conundrums like this. Google, Facebook, Microsoft, Apple, Twitter, Tumblr, broadband operators, wireless providers, cable companies, etc., are routinely inundated with complaints from average users, corporations, concerned citizens, activist groups, and other constituencies. All of them have a different beef they want addressed—spam, privacy, porn, violent content, defamation, harassment, hate speech, religious persecution, terrorism, and more. In a column this January, I noted how some academics even want Google to act as a veritable “Ministry of Truth” and somehow weed out medical quackery and conspiracy theorists.
The consistent thread here is that many people believe online operators should do more to police their networks or platforms for speech or expression that they find objectionable or dangerous. At the same time, however, plenty of others demand that those same information and communications intermediaries do less to police speech online—especially the speech that they strongly favor.
This week, for example, the National Religious Broadcasters (NRB), a non-partisan association that represents Christian broadcasters and communicators, proposed a “Free Speech Charter for the Internet,” to encourage Internet intermediaries to adopt the most free speech-friendly policies possible. Specifically, NRB wants to ensure that religious expression, including views on gay marriage and abortion, is not classified as hate speech and blocked by private companies. To accomplish that goal, NRB has called on all private platform providers to voluntarily abide by the same stringent First Amendment standards that apply to government actors.
That is a very high bar in terms of the constraints it places on speech policies. Under current Supreme Court jurisprudence, governments may only limit speech in extreme cases, mostly to prevent incitement to immediate violence. Almost everything else is off limits. The First Amendment now firmly protects profane and indecent speech, violent content, hate speech, and more. In the 2002 case Ashcroft v. Free Speech Coalition, the Supreme Court even held that virtual depictions of child pornography could not be prohibited since there was no obvious victim of such expression. And in the 2010 case United States v. Stevens, the Court struck down a federal law that criminalized the creation or sale of videos showing actual animal cruelty.
Not everyone is happy with the absolutist approach our nation has adopted toward free expression, but it is the law of the land now, and it places very strict limits on state action aimed at limiting speech.
While some of us appreciate those limits on government speech control, it’s not clear that private operators should be held to the same standard that constrains governments. It’s easy to sympathize with NRB’s suggestion that the best default speech policy should be a strong presumption in favor of letting speech flow with only a few very narrow exceptions, but should all private platforms really apply that high-bar standard all the time?
The answer is no. We don’t necessarily want all providers to apply the same absolute free speech standard because citizens expect to be able to find platforms that cater to their specific interests or values. A diverse citizenry demands diverse platforms and communications choices. The First Amendment protects the editorial discretion of private providers, not just because they have property rights in their platforms, but also because we value the diversity of speech policies and platforms that develops from this approach.
Think about it. Most of us wouldn’t want a single, over-arching speech code for all media platforms that perfectly mimics the standard we apply to governments. While the First Amendment blocks almost all government efforts to limit sexually-explicit or extremely violent content, hate speech, animal cruelty videos, and even virtual child porn, most people don’t want to see that sort of content popping up regularly when they visit Facebook or Apple’s App Store, for example. That’s why both of those companies and others screen content and apps to ensure it doesn’t appear. Of course, there are plenty of other places on the Internet to find such content for those who desire it.
The issue gets stickier when it involves speech about religion, gay marriage, and abortion politics. On these issues, one person’s “legitimate religious expression” is another’s “hate speech.” Precisely because these issues are so contentious and common ground is so difficult to find, both sides will push for information intermediaries to take their side in these spats.
For example, in 2010, Apple blocked the “Manhattan Declaration” app, which was developed by a Christian group and included language condemning gay marriage. Apple claimed the app violated its App Store developer guidelines “by being offensive to large groups of people.” Again, that’s Apple’s right even if some of us would prefer a more tolerant approach to user expression. Moreover, all iPhone users need to do to find the full text of the “Manhattan Declaration” is pull up their web browser and search for it.
Facebook also made news recently when it blocked a New Yorker cartoon depicting the biblical characters Adam and Eve in the nude. The site later admitted it was a mistake and reinstated the cartoon, but it’s another sign of the challenges Facebook faces as it tries to develop content policies for a 900-million strong online community.
And then there’s Google’s decision to block the anti-Muslim video that has caused so much angst overseas this week. Was it the right move? Those of us who want private companies to uphold free speech values even in hostile overseas environments are disappointed. Then again, Google, or at least its popular YouTube service, might be shut down entirely throughout the Middle East if it didn’t comply with some local standards. Isn’t it better that YouTube remains available to citizens in these countries even if a few videos get blocked? Again, that isn’t optimal from my own strongly pro-free speech perspective, but that’s the inescapable trade-off Google faces if it wants to do business there.
There are no easy answers for these firms because, when it comes to private speech and conduct policies, it’s impossible to make everyone happy, whether it’s American citizens or governments half a world away. But we can at least be thankful that we are blessed to live during a time when all our interests and values can be served by a diverse set of information and communications platforms, so that we can find—or avoid—whatever content we want.