Why Facebook’s Hate-Speech Policy Makes So Little Sense

Posted in: Technology Law

A recent New York Times article disclosed details of Facebook’s global effort to block hate speech and other ostensibly offensive content. As the article explains, Facebook has good reason to worry that some people use its platform not just to offend but to undermine democracies and even to incite deadly violence. Yet Facebook’s response seems curious, even perverse. Presumably well-meaning young engineers and lawyers gather every other week to update thousands of PowerPoint presentations, rules, and guidelines for its roughly 15,000 relatively low-skilled “moderators” to apply formulaically to sort permissible from verboten posts—sometimes using Google Translate for material in languages they do not know.

It is easy to condemn Facebook’s approach to hate speech as ham-fisted. It is considerably harder to fashion a perfect alternative. Below I consider the causes and effects of Facebook’s policy. As I shall explain, the issues it raises implicate tough questions regarding free speech more broadly, both in the US and elsewhere.

Private Versus Public Censorship

Most countries—including most constitutional democracies—forbid hate speech. The US does not.

As construed by the Supreme Court in cases like R.A.V. v. St. Paul and Virginia v. Black, the First Amendment protects very offensive speech against censorship, unless the speech in question constitutes incitement—strictly defined in the leading case of Brandenburg v. Ohio to cover only those words and symbols that are intended and likely to inspire “imminent lawless action.” The Constitution even protects hate speech that has an historical association with violence. The defendants in each of the cases just cited were charged with cross-burning, and yet they all won.

Accordingly, in the US, the government may neither directly censor hate speech nor require private companies like Facebook to do so. However, the Constitution does not apply to private actors, and Facebook has adopted its policy voluntarily. Thus, the policy does not raise a constitutional question here.

Yet given Facebook’s reach, one might worry that it exercises the kind of power that governments typically do and thus should be subject to the same sorts of restrictions that apply to governments—including constitutional limits. In the early twentieth century, some progressive thinkers, including Louis Brandeis, who would go on to become a leading free-speech champion on the Supreme Court, made just that sort of argument. Highly concentrated private power, these progressives said, can pose at least as great a threat to liberty as government power does. If Facebook is effectively the only game in town, then it does not much matter from the user’s perspective whether Facebook censors in response to a government mandate or of its own volition.

Moreover, it is not fully accurate to say that Facebook censors speech “voluntarily.” Fewer than fifteen percent of Facebook’s users live in the US. Hundreds of millions live in countries that forbid hate speech. As a recent European Union report underscores, much of the world understands human rights treaties not only to permit censorship of hate speech but to affirmatively require it.

It thus seems likely that Facebook censors hate speech globally in response to legal obligations to do so in many of the countries in which it operates. Americans who prefer the no-holds-barred approach of our constitutional case law may worry that Europeans and others who take a more restrictive view are effectively imposing their hate-speech regime on us.

Whether we should prefer the no-holds-barred approach is a highly contested question. In this brief column, I can hardly do it justice, so I shall merely refer interested readers to a couple of excellent books arguing opposite sides of the question. Professor Jeremy Waldron of Oxford and NYU Law School defends the European approach in The Harm in Hate Speech. New York Law School Professor (and former ACLU President) Nadine Strossen defends the US approach in HATE: Why We Should Resist It With Free Speech, Not Censorship.

If you find Strossen persuasive, then you may well regard the export of European-style hate-speech censorship to the US as an unfortunate consequence of the fact that the US Constitution does not apply to private actors like Facebook—the exploitation of a kind of loophole. If you find Waldron persuasive, you might be grateful for that loophole, but you might also be concerned that the people doing the censoring are ultimately accountable only to Facebook’s shareholders rather than to the American People via the democratic process.

Rules Versus Standards

In addition to questioning the fact that Facebook censors hate speech, one might worry about the way it does so. Whether any particular Facebook post poses a genuine danger will typically require a context-sensitive judgment by a person familiar with the language, culture, and locale in question. The rigid application of Facebook’s rules, by contrast, inevitably results in a pattern of grossly under- and over-inclusive censorship—ensnaring innocent speech while dangerous material slips the net.

So why does Facebook do it this way? The short answer is that the alternative to rigid rules—a flexible standard—has its own mirroring vice: flexible standards vest discretion in whoever makes the decision in each case, and such discretion will be exercised inconsistently from decision maker to decision maker, while also being subject to abuse. The sort of person most likely to understand the contextual meaning of a Facebook post will also typically have a political perspective or other bias that can infect the decision whether to censor it.

In regulating speech via rigid rules rather than flexible standards, Facebook follows roughly in the path of Supreme Court case law construing the First Amendment. Even when government may limit speech—as in licensing public property for expressive activities such as marches and rallies—it may only do so pursuant to rules sufficiently determinate to avoid granting the licensor “unbridled discretion.”

To be sure, there is an important additional limit in the First Amendment context. Licensors must use relatively determinate rules that aim only at the time, place, or manner of speech, not at its content. By contrast, the Facebook censors take aim at content.

But even First Amendment law occasionally allows restricting speech based on content—especially where the speech falls into one of a small number of what the Court has identified as categories of “unprotected speech.” How does the Court draw the boundaries? In a near-unanimous 2010 case, Chief Justice Roberts asserted that the unprotected categories were historically outside of the “freedom of speech” to which the First Amendment refers, but, with due respect, that is at best law office history. The Court has made no serious effort to check what my colleague Professor Steven Shiffrin aptly calls its “frozen categories” against the understanding of the People in 1791 (when the states ratified the First Amendment) or in 1868 (when they ratified the Fourteenth Amendment, which makes the First Amendment enforceable against the states).

The Supreme Court’s categories of unprotected speech in fact arose based on what First Amendment scholar Melville Nimmer described (in an extremely important 1968 article in the California Law Review) as a process of “definitional balancing,” in which the Court weighed the value of categories of speech against the harm they caused for purposes of deciding whether to count a category as protected at all. By focusing on the categorical level, Nimmer argued, courts could eschew the problematic exercise of discretion inherent in “ad hoc balancing”—that is, balancing the costs and benefits of speech in particular cases.

As Nimmer himself explained, the categorical approach is hardly perfect, but the vices it has are the vices typical of rules: under- and over-inclusiveness as applied to unanticipated or unusual cases. And it is that very same judgment—a greater willingness to tolerate the vices of rules than to tolerate the vices of standards where speech is concerned—that underlies Facebook’s efforts to drastically curtail the discretion it gives its moderators.

So yes, Facebook’s rules are ridiculous. But given the legal imperative to censor hate speech in many of the countries in which it operates, Facebook may not have any especially good alternatives.
