Last week, the Facebook Oversight Board delivered a mixed verdict regarding the company’s treatment of former President Donald Trump’s account. The Board said that Facebook was justified in removing Trump’s incendiary posts during and immediately after the January 6 insurrection. The Board likewise approved of Facebook’s decision to suspend Trump’s account in the days and weeks that followed, while the threat of further violence at or around the Capitol remained high. However, the Board also ruled that the “indefinite” suspension of Trump’s Facebook account could not be evaluated, because Facebook lacked clear standards both for imposing the suspension and for reinstating an account.
There is much good sense in the Board’s ruling. Still, although Facebook can and should do better in ways the Board suggests, in at least one respect the Board’s decision asks the impossible: it demands clear rules that take careful account of contextual nuance. As I explain below and as legal scholars have long known, ex ante predictability and contextual customization are often in tension with one another.
The Board’s Origin Story and Powers
Facebook is a private company that operates around the world and thus must comply with multiple bodies of law. For example, some countries restrict hate speech. The United States does not, but private companies are not bound by the U.S. Constitution and indeed have their own free speech rights, including the right to limit the speech (by contract or otherwise) of users of their platforms. Further, a statute, Section 230 of the federal Communications Decency Act, provides Facebook and other providers of Internet services with some protection against liability for content posted by users. Constitutional, statutory, and regulatory requirements exist in other countries as well, as do limits imposed by international law and by Facebook’s own policies, which seek to balance free expression against community safety and other values.
Facebook implements its policies about permissible and impermissible content by employing algorithms and thousands of human moderators. As I discussed in a 2019 column here on Verdict, the enormous volume, scope, and diversity of Facebook postings make the task Herculean. Thus, the policies often misfire, sometimes eliminating material that should be allowed and other times letting violations slip through the cracks.
In most circumstances, Facebook can correct its mistakes at relatively low cost. Improperly removed content and suspended accounts can be restored; proscribable content that was allowed to remain visible can be belatedly deleted. However, sometimes the stakes are very high. Content that violates Facebook’s policies might not simply cause offense or hurt but could be crucial in fomenting ethnic cleansing or a riot. Conversely, applications of Facebook’s limits to political leaders like a sitting or former U.S. president underscore the awkward fact that a private company has enormous political power.
Facebook created the Oversight Board to address high-stakes cases. It has been unofficially dubbed the “Supreme Court of Facebook” because it reviews challenged decisions by Facebook (and by Instagram, which Facebook owns). The Oversight Board is like the U.S. Supreme Court in that it chooses which cases to decide. Just as Rule 10 of the Supreme Court Rules explains that the justices exercise discretion to hear only important cases, so the Facebook Oversight Board’s appeals process reserves review for cases that present “difficult, significant and globally relevant” issues.
The Board, which currently has twenty distinguished members with diverse backgrounds and perspectives, hears cases in panels of five. Panel decisions are reviewed by the entire Board before being made public. Published decisions do not include dissents or separate concurrences. For example, in the Trump case, the final opinion indicates that while a majority thought the risk of violence justified Facebook’s initial actions, a minority would have gone further and relied on Trump’s pattern of racism and assaults on dignity. The failure to note individual votes or to publish separate dissents or concurrences may seem odd, but until recently that was the pattern in many parts of the world, including in the constitutional courts of Europe, and it remains the practice in some places even today.
Although the Board is court-like in many respects, it is not a court. Perhaps for that reason, Facebook has taken various steps to make the Board’s decisions credible. These include providing a dedicated funding stream for the Board’s operations, publicly committing to treat the Board’s rulings as binding, and naming well-respected, independent-minded thinkers to the Board.
The Ruling Against Trump
The Oversight Board heard submissions from the public and from people more directly involved in the Trump case, including the former president’s own lawyers. The Board unanimously ruled against Trump on two significant points.
First, the Board said that Trump’s claims that he won the 2020 presidential election but had it stolen from him were false. The Board matter-of-factly and repeatedly noted that Trump never provided any evidence for these claims. Of course, the rejection of Trump’s lies will only undermine the Board’s credibility with Trump supporters, notwithstanding the credentials and ideological diversity of its members. As the experience of Congresswoman Liz Cheney illustrates, no matter how conservative someone has been, willingness to tell and act on the truth about Trump is itself discrediting to those in the thrall of or cowed by his personality cult.
Second, the Board linked Trump’s inflammatory lies to the January 6 violence at the Capitol. During and since the insurrection, Trump and his apologists have repeatedly sought to downplay his rhetoric. For example, during Trump’s second impeachment trial, his lawyers showed a video of Democratic politicians using words like “fight” to argue that Trump’s statements—on social media and in the rally that immediately preceded the assault on the Capitol—were garden-variety political rhetoric. His lawyers made similar arguments to the Oversight Board.
The Board would have none of it, rightly pointing out the importance of context. While his supporters were engaging in mayhem, Trump took to Facebook, not to urge them in no uncertain terms to stop immediately, but to tell them he loved them and to repeat the very incendiary lies that had inflamed them in the first place. True, as Trump’s lawyers emphasized, Trump’s grievance-soaked Facebook posts also included requests that his brownshirts “go home,” but such a disclaimer does not cancel out the inflammatory nature of the balance of the posts.
Consider a hypothetical variation on a tragic real case. Leon Mugesera is currently serving a life sentence in Rwanda for incitement to genocide: in 1992, he told a mob of Hutu supporters that members of the Tutsi minority were “cockroaches” whose throats should be slit. His lawyer complained in 2013 to a newspaper in Canada (from which Mugesera was extradited) that the trial was unfair because there was no complete transcript of his speech. Suppose that such a transcript existed and revealed that, in addition to successfully inciting the Hutu mob to murder nearly a million of Rwanda’s Tutsi minority, Mugesera had also said “be peaceful and go home.” No one would think that such a perfunctory disclaimer would or should mitigate Mugesera’s guilt, because such insincere language would not have been calculated to avert, and would not have averted, the violence that ensued. Likewise, the Facebook Oversight Board rightly gave Trump’s insincere “go home” the back of its hand.
How Facebook Falls Short—and How the Board Did Too
Despite affirming that Facebook acted in accordance with its own standards and general principles of free speech in response to Trump’s January posts, the Oversight Board rejected Facebook’s “indefinite” suspension of Trump’s account. The Board did not say that Trump is necessarily entitled to have his account reinstated. Rather, the Board concluded that Facebook should set a determinate penalty—such as a permanent ban or one lasting for a fixed rather than indefinite period. The Board allowed the suspension to continue for now but directed Facebook to adopt a clear policy and apply it to Trump within six months.
The Oversight Board also tacitly criticized Facebook for failing to answer “questions about how Facebook’s news feed and other features impacted the visibility of Mr. Trump’s content.” That criticism is fair. One can presume that Facebook refused to answer because doing so might reveal, to the Board and potentially to the broader public, the prized but closely guarded algorithms it uses to determine when and how widely Facebook and Instagram posts are seen. Perhaps that makes sense from a business perspective, but insofar as the entire review process is meant to ensure accountability, withholding the information undercuts the credibility of Facebook’s policies and their implementation.
The Oversight Board’s most biting criticism was the following statement: “In applying an indeterminate and standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities.” That complaint is partly but only partly fair.
The Board is surely correct that a commitment to free speech requires the articulation and even-handed application of uniform standards. Otherwise, Facebook risks reinforcing the perception, and perhaps even the reality, that it distinguishes among speech and speakers not based on the different dangers they pose to safety and human rights—which should be acceptable—but based on agreement or disagreement with political opinions within the range of reasonable debate—which would be a kind of private censorship that Facebook purports to eschew.
Yet the Board’s critique is wrongheaded in two respects. First, there is nothing especially problematic about Facebook referring a difficult question to the Board, even if what Facebook seeks is guidance about its general policies rather than the application of those policies to a particular case. True, in the U.S. and many other legal systems, courts typically prefer to review rather than formulate policy, but even in the U.S., courts set out general principles for regulated actors, including governments, to follow. Moreover, to repeat, the Oversight Board is not a court, so it is not bound by traditional principles of judicial restraint. And in any event, despite the Board’s annoyance, Part 10 of its opinion is a “policy advisory statement” that can and likely will provide Facebook with some of the very guidance the Board chided Facebook for seeking.
To be sure, the Board’s opinion does not frame that “advisory statement” as the requisite policy, instead urging Facebook “to develop and publish” various policies of its own. But that urging points to the second respect in which the Board’s critique goes astray: it is naïve. Yes, Facebook can provide greater detail and transparency about its content standards, penalties, and procedures. But the Board’s own opinion shows why there is a limit to just how specific Facebook can be.
The Board’s opinion is about the same length as a typical U.S. Supreme Court opinion (just under ten thousand words). In that space, the word “context” appears twenty-one times, underscoring that in determining whether content can be displayed or a user can be permitted to post on Facebook, one-size-fits-all rules will not do. “Context” is a signal that a legal regime relies on fuzzy standards, which have the virtue of customization and the vice of vagueness, rather than clear rules, which have the mirror-image virtue of precision and the vice of misfiring or producing disproportionate results in particular cases.
Put differently, the Board’s own decision shows that it is demanding something impossible of Facebook: clear rules that provide advance guidance and predictability while also allowing Facebook’s high-level moderators sufficient flexibility to take appropriate account of the innumerable and varied contexts in which free speech can conflict with other worthy values. Facebook can trade some amount of predictability for some degree of contextualization, but it cannot simultaneously optimize both.
The Board’s distinguished members should know that much already. If Facebook’s leadership team does not, it will soon find out.