February 9, 2021

Facebook’s content moderation team makes more than three million decisions every day about whether a post breaks its various content rules, ranging from those on nudity to hate speech. So on paper, there is nothing particularly remarkable about five such decisions last month; they concerned nudity in Brazil, Covid misinformation in France and hate speech in Russia, the US and Myanmar.

But while these are the kind of issues Facebook’s tens of thousands of moderators face on a daily basis, these five could mark the start of a new era of governance for the entire internet. They are, in effect, the first rulings of the site’s contentious new Oversight Board.

Facebook has for years been searching for a simple solution to the problems thrown up by its enormous growth; it alone has two billion users, while Instagram (which it also owns) has nearly as many. But who should be in charge of uniformly monitoring their posts?

Critics have long warned that no company — let alone one largely controlled by Mark Zuckerberg — should have that much power. At the same time, Facebook has been reluctant to let individual governments introduce onerous laws governing its operation — for the simple reason that it could leave the website with 190 or more separate legally binding rulebooks to enforce. It has claimed to be open to countries cooperating to come up with such a set of rules, but, unsurprisingly, that has not happened.

So into that gulf Facebook has created an Oversight Board, which it has endowed with $130 million for its first years of operation. Predictably the board, which is supposedly independent and has the power to reverse the company’s decisions, has faced fervent criticism from the outset. Long before it first made any of its rulings, critics set up an opposing “Real Facebook Oversight Board”, triggering a legal war of words with the social media giant over whether or not they could use that name.

The early consensus seemed clear: the Oversight Board would not be sufficiently independent, would be composed of patsies and should be written off early. Yet the real picture seems far more complex: after all, if Facebook had wanted to set up an organisation to act as a fig leaf for accountability, it could have done so far more easily and without such an excessive price tag.

For a start, Facebook is unable to pull the funding for the Oversight Board for at least six years, meaning it is committed to spending more than $20 million a year over that period. By then the Board should in theory be well established, so if it has proven itself effective and Zuckerberg pulls the plug anyway, the world's media will notice. Not a good look.

As for Facebook’s claims about the Board’s independence, there are legitimate concerns about the company’s ability to refer cases to the Board and require it to rule on them. But for the most part, the Board is able to select its own cases from the thousands sent its way by user appeals. It is also able to draw upon whatever human rights and free expression principles it chooses to shape its decision making.

But funding and terms of reference can only get you so far. The real question is who is chosen to make up the membership of the Oversight Board. And here lies a catch: the first batch are chosen by Facebook, as they inevitably would be, directly or indirectly (someone has to pick the initial members, and only Facebook is in a position to do so). But future Board members will be chosen by current ones.

So has the company loaded up the Oversight Board with loyalists? The honest answer is that it doesn’t seem so. Quite a few people among the first 20 are public critics of the site, and most of the others don’t seem like its natural allies. Law professor Jamal Greene has publicly stated “there’s lots of reasons” not to trust Facebook, while professor Evelyn Aswad has published multiple papers calling on social media giants to align themselves with international human rights law.

Only five of the first 20 Board members are from the USA, while ten are women. They reach across the political spectrum, though most come from a background in human rights, freedom of expression or international law. But the Board also includes former Prime Minister of Denmark Helle Thorning-Schmidt, and Alan Rusbridger, editor of The Guardian when it published Edward Snowden’s revelations on NSA surveillance and Big Tech’s complicity therein. (Disclosure: I worked for The Guardian on that project.)

More importantly, the Board’s first five rulings have hardly gone in Facebook’s favour: indeed, it ruled against the social media giant in four of them. One such ruling reinstated a French post advocating for the use of hydroxychloroquine to treat Covid-19. The Board didn’t find the post to be accurate — it agreed it was likely misinformation — but found it didn’t meet the site’s criteria for posing a risk of “imminent harm”. Instead it concluded the rules themselves were “inappropriately vague and inconsistent with international human rights standards”.

Certainly it’s encouraging to see the Oversight Board making a robust attempt to hold Facebook to account and force it to improve its rulebook. In time, this will no doubt create a set of rules tested and tempered by human rights experts around the world. Indeed, it’s not difficult to imagine how this could easily become the model rulebook — a constitution of sorts — for the site, and from there for the entire internet. If other tech companies see this system work out for one platform and, more importantly, reduce the criticism it receives, they will likely adopt it themselves.

And that would probably represent a real improvement on today’s reality. But make no mistake: the biggest winner here would still be Facebook. Through a process it controlled, the company would get the chance to build a rulebook around a single question: what is the slimmest and cheapest set of rules it can enforce that would get lawmakers and activists off its back?

As a process, it’s like allowing a defendant to not only choose their judge and jury, but to also help write the criminal code before their case begins. However much you try to maintain an appearance of fairness, everyone is still going to suspect you’ve stacked the deck to help yourself.

To switch metaphors, think about which of two dictators is the cannier operator. Is it the one who pumps out blatant propaganda on television, jails serious political opponents and wins every election with 95% of the vote? Or is it the one who allows just enough dissent, lets rival political parties exist, but makes sure that come what may he’ll always get enough votes to stay in power?

At the end of the day, it’s worth remembering that Facebook makes tens of billions in profits each year, so $130 million over six years would be an extremely low-cost way of perpetuating something close to the status quo. For if this effort is successful, the benefits will be immeasurable. It would allow the company to confine scrutiny of which content should be permissible to case-by-case examples. It would stifle discussion of how its algorithms work, and what effects they have. And most importantly, if the Oversight Board is used by Facebook to resist efforts by nation states to create their own legislation, the company would essentially be elevating itself to a position above democratically elected governments.

Activists are understandably worried about the prospect of a toothless Facebook Oversight Board stacked with patsies. Perhaps they should be more worried about the effects of one that actually does its job.