🚨 The tier list: How Facebook decides which countries need protection

At the end of 2019, the group of Facebook employees charged with preventing harms on the network gathered to discuss the year ahead. At the Civic Summit, as it was called, leaders announced where they would invest resources to provide enhanced protections around upcoming global elections — and also where they would not. In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.

Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously, built dashboards to analyze network activity, and alerted local election officials to any problems.

Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar protections, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election.

Tier two added 22 more countries. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”

The rest of the world was placed into tier three. Facebook would review election-related material only if content moderators escalated it to the company. Otherwise, it would not intervene.
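
To make the shape of that system easier to see, here is a minimal, hypothetical sketch of a tier-to-protections mapping. The tier contents follow the descriptions above, but every name, field, and data structure is my own illustration — the documents describe the policy, not its code.

```python
# Hypothetical sketch of the tiered election protections described above.
# Field names and the mapping itself are illustrative, not Facebook's code.
from dataclasses import dataclass


@dataclass(frozen=True)
class TierProtections:
    war_room: bool               # an "enhanced operations center" staffed around the election
    dashboards: bool             # continuous monitoring and analysis of network activity
    official_alerts: bool        # proactive alerts to local election officials
    proactive_enforcement: bool  # enforcement beyond what content moderators escalate


TIERS = {
    0: TierProtections(True, True, True, True),     # Brazil, India, United States
    1: TierProtections(True, True, True, True),     # similar, minus resources outside the election window
    2: TierProtections(False, True, True, True),    # 22 countries, no war rooms
    3: TierProtections(False, False, False, False), # everyone else: act only on escalations
}

COUNTRY_TIER = {
    "Brazil": 0, "India": 0, "United States": 0,
    "Germany": 1, "Indonesia": 1, "Iran": 1, "Israel": 1, "Italy": 1,
}


def protections_for(country: str) -> TierProtections:
    """Anything not explicitly assigned a tier falls through to tier three."""
    return TIERS[COUNTRY_TIER.get(country, 3)]
```

In this sketch, `protections_for("Ethiopia")` falls through to tier three — which is roughly the gap the rest of the documents describe.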

The system is described in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen’s legal counsel. A consortium of news organizations, including Platformer and The Verge, has obtained the redacted versions received by Congress. Some documents served as the basis for earlier reporting in the Wall Street Journal.

The files contain a wealth of documents describing the company’s internal research, its efforts to promote users’ safety and well-being, and its struggles to remain relevant to a younger audience. They highlight the degree to which Facebook employees are aware of the gaps in their knowledge about issues in the public interest, and their efforts to learn more.

But if one theme stands out more than others, it’s the significant variation in content moderation resources afforded to different countries, based on criteria that are not public or subject to external review. For Facebook’s home country of the United States, and other countries considered at high risk of political violence or social instability, Facebook offers an enhanced suite of services designed to protect the public discourse: translating the service and its community standards into those countries’ official languages; building AI classifiers to detect hate speech and misinformation in those languages; and staffing teams to analyze viral content and respond quickly to hoaxes and incitement to violence on a 24/7 basis.

Other countries, such as Ethiopia, may not even have the company’s community standards translated into all of their official languages. Machine learning classifiers to detect hate speech and other harms are not available. Fact-checking partners don’t exist. War rooms never open.

For a regular company, it’s hardly controversial to allocate resources differently based on market conditions. But given Facebook’s key role in civic discourse — it effectively replaces the internet in some countries — the disparities are cause for concern. 

For years now, activists and lawmakers around the world have criticized the company for the inequality in its approach to content moderation. But the Facebook Papers offer a detailed look into where Facebook provides a higher standard of care — and where it doesn’t. 

Among the disparities:

- Facebook lacked misinformation classifiers in Myanmar, Pakistan, and Ethiopia, countries designated at highest risk last year.

- It also lacked hate speech classifiers in Ethiopia, which is in the midst of a bloody civil conflict.

- In December 2020, an effort to place language experts in countries had succeeded in only six of ten “tier one” countries, and in zero tier two countries.

Miranda Sissons, Facebook’s director of human rights policy, told me that allocating resources in this way reflects the best practices suggested by the United Nations in its Guiding Principles on Business and Human Rights. Those principles require businesses to consider the human rights impact of their operations and to mitigate any harms based on their scale, severity, and whether the company can design an effective remedy for them.

Sissons, a career human rights activist and diplomat, joined Facebook in 2019. That was the year the company began developing its approach to what it calls “at-risk countries” — places where social cohesion is declining, and where Facebook’s network and powers of amplification risk fueling incitement to violence.

The threat is real: other documents in the Facebook Papers detail how new accounts created in India that year would quickly be exposed to a tide of hate speech and misinformation if they followed Facebook’s recommendations. (The New York Times detailed this research on Saturday.) And even at home in the United States, where Facebook invests the most in content moderation, documents reflect the degree to which employees were overwhelmed by the flood of misinformation on the platform leading up to the January 6 Capitol attack. (The Washington Post and others described these records over the weekend.)

Documents show that Facebook can conduct sophisticated intelligence operations when it chooses to. An undated case study into “adversarial harm networks in India” examined the Rashtriya Swayamsevak Sangh, or RSS, a nationalist, anti-Muslim paramilitary organization, and its use of groups and pages to spread inflammatory and misleading content. 

The investigation found that a single user in the RSS had generated more than 30 million views. But the investigation noted that to a large extent, Facebook is flying blind: “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned.” 

One solution could be to penalize RSS accounts. But the group’s ties to India’s nationalist government made that a delicate proposition. “We have yet to put forth a nomination for designation of this group given political sensitivities,” the authors said. 

Facebook likely spends more on integrity efforts than any of its peers, though it is also the largest of the social networks. Sissons told me that ideally, the company’s community standards and AI content moderation capabilities would be available in the languages of every country where Facebook operates. But even the United Nations supports only six official languages; Facebook has native speakers moderating posts in more than 70.

Even in countries where Facebook’s tiers appear to limit its investments, Sissons said, the company’s systems regularly scan the world for political instability or other risks of escalating violence so that it can adapt. Some projects, such as training new hate speech classifiers, are expensive and take many months. But other interventions can be implemented more quickly.

Still, documents reviewed by The Verge also show the way that cost pressures appear to affect the company’s approach to policing the platform. 

In a May 2019 note titled “Maximizing the Value of Human Review,” the company announced that it would create new hurdles for users reporting hate speech, in hopes of reducing the burden on its content moderators. It also said it would automatically close reports without resolving them in cases where few people had seen the post or the issue reported was not severe.
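
As a rough illustration of what that kind of triage rule looks like in practice, here is a minimal, hypothetical sketch. The view threshold, severity score, and function names are my assumptions — the note describes the policy, not the code behind it.

```python
# Hypothetical sketch of the report triage described in "Maximizing the Value
# of Human Review." Thresholds and field names are assumptions, not Facebook's.
from dataclasses import dataclass


@dataclass
class HateSpeechReport:
    post_views: int        # how many people saw the reported post
    severity_score: float  # 0.0 (benign) to 1.0 (severe), e.g. from an upstream classifier


# Illustrative cutoffs only.
LOW_REACH_VIEWS = 100
LOW_SEVERITY = 0.3


def route_report(report: HateSpeechReport) -> str:
    """Auto-close reports with little reach or low severity; queue the rest for humans."""
    if report.post_views < LOW_REACH_VIEWS or report.severity_score < LOW_SEVERITY:
        return "auto_close"   # closed without resolution, per the May 2019 note
    return "human_review"
```

The actual cutoffs Facebook used are not in the note; the point of the sketch is the shape of the trade-off — reach and severity decide whether a human ever sees the report.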

The author of the note said that reviewers found 75 percent of hate speech reports did not violate Facebook’s community standards, and that their time would be better spent proactively looking for worse violations.

But there were concerns about expenses as well. “We’re clearly running ahead of our [third-party content moderation] review budget due to front-loading enforcement work and will have to reduce capacity (via efficiency improvements and natural rep attrition) to meet the budget,” the author wrote. “This will require real reductions in viewer capacity through the end of the year, forcing trade-offs.” 

Employees have also found their resources strained in the high-risk countries that the tier system identifies.

“These are not easy trade-offs to make,” reads the introduction to a note titled “Managing hostile speech in at-risk countries sustainably.” (Facebook abbreviates these countries as “ARCs.”)

“Supporting ARCs also comes at a high cost for the team in terms of crisis response. In the past months, we’ve been asked to firefight for India election, violent clashes in Bangladesh, and protests in Pakistan.” 

The note says that after a country is designated a “priority,” it typically takes a year to build classifiers for hate speech and to improve enforcement. But not everything gets to be a priority, and the trade-offs are difficult indeed.

“We should prioritize building classifiers for countries with on-going violence … rather than temporary violence,” the note reads. “For the latter case, we should rely on rapid response tools instead.” 

After reviewing hundreds of documents and interviewing current and former Facebook employees about them, it’s clear that a large contingent of workers within the company are trying diligently to rein in the platform’s worst abuses, using a variety of systems that are dizzying in their scope, scale, and sophistication. It’s also clear that they are facing external pressures over which they have no control — the rising right-wing authoritarianism of the United States and India did not begin on the platform, and the power of individual figures like Donald Trump and Narendra Modi to promote violence and instability should not be underestimated.

And yet it’s also hard not to marvel once again at Facebook’s sheer size; the staggering complexity of understanding how it works, even for the people charged with operating it; the opaque nature of systems like its at-risk countries “work stream,” and the lack of accountability in cases where, as in Myanmar, the whole thing spun violently out of control.

Some of the most fascinating documents in the Facebook Papers are also the most mundane: cases where one employee or another wonders out loud what might happen if Facebook changed this input to that one, or ratcheted down this harm at the expense of that growth metric. Other times the documents find them struggling to explain why the algorithm shows more “civic content” to men than women, or why a bug let a violence-inciting group in Sri Lanka automatically add half a million people to the group — without their consent — over a three-day period.

There is a pervasive sense that, on some fundamental level, no one is entirely sure what’s going on.

In the documents, comment threads pile up as everyone scratches their heads. Employees quit and leak them to the press. The communications team reviews the findings and writes up a somber blog post, and affirms that There Is More Work To Do. 

Congress growls. Facebook changes its name. The world’s countries, neatly arranged into tiers, hold their breath.


About the Facebook Papers

The above story comes from access I had to the Facebook Papers. Platformer was the only independent outlet invited to join the consortium of publications receiving the documents, and I’ve spent the past two weeks reading hundreds of pages of internal discussions, charts, and other materials that Frances Haugen first shared with the Wall Street Journal. I also worked with colleagues at The Verge, who published their own stories from the documents today.

I continue to receive new documents every weekday, and have been told to expect more until the end of November. The documents arrive with no particular eye toward organization or theme. To take a few items at random, today’s dump, which arrived after this morning’s embargo lift for the Facebook Papers, includes: a 24-page essay from an anonymous employee in 2018 exploring the pros and cons of ranking feeds; a 2019 look at the network effects of enforcing hate speech policy; and an undated postmortem about disrupting a human trafficking network in Saudi Arabia.

To understate the case here: as a journalist who has covered Facebook for nearly a decade, it’s extraordinary to be able to read these documents and learn more about the company. Particularly because, in the majority of cases, the documents were created by people working to address — rather than exacerbate — harms on the platform. The documents Haugen shared with Congress are, effectively, the story of internal resistance: of a rebellion that the company’s leadership did not appear to see coming, even as the pace of leaks accelerated from a trickle to a torrent over the past few years.

It’s also clear that Haugen has become extremely useful to an array of interest groups who, for one reason or another, would like to see Facebook disappear — for societal reasons, corporate reasons, or both. And that the press, which is generally predisposed to assuming the worst about Facebook but has lacked internal data to support its concerns, will now make matching documents to its prior assumptions a central editorial project of the next six weeks.

Perhaps you sense my discomfort here: I relished a chance to see these documents for myself, and lamented the fact that in doing so I would be used as a pawn in a larger game. (Every source has an agenda; as a journalist, though, you should strive to know whose game you are playing.) Reading the documents, I found myself most concerned about the disparity with which Facebook deploys its moderation resources around the world, which could serve to reinforce existing inequalities, and the generally opaque nature of the way it approaches this work. But I was also struck, over and over, by how many people were working to change that.

Platformer readers are likely to be the folks sifting through today’s 53 or so stories to find the nuances here. To the average person, though, I suspect it all just looks like a mushroom cloud: a miasma of anti-Facebook coverage that will pressure regulators to intervene and do … something, which has been Haugen’s project from the beginning. The clever decision to continue releasing documents for the next several weeks raises the likelihood that this pressure will escalate as the weeks go on. The document release strategy is, effectively, a drip marketing campaign for the Haugen project overall. (Today’s tour stop was in the United Kingdom, where she testified.)

I will continue to read and write about any Facebook Papers I find newsworthy here. But it seems to me that this is the moment to expand the lens of our coverage. The basic ground of the document dump is now well trod. (A host of new publications joined the consortium today, ensuring another volley of coverage designed to squeeze more juice from the rind.) Meanwhile, the people backing Haugen — and their goals — are beginning to come into focus, just as regulators begin licking their lips at the prospect of using her testimony to achieve long-desired (and potentially terrible) goals. And those goals could have huge consequences for the internet at large.

Since I joined the consortium, the question of who is using whom, and for what, has been nagging at me. As we enter this next phase, it’s time for that question to come to the fore.


Owing to all the above, I’m throwing today’s links into Sidechannel’s links room without commentary. The #platformer-links channel is open to all paid subscribers; upgrade your subscription today and I’ll email you the link.

If you only have time for one other story today, I recommend this one about Facebook leaking 6,500 gallons of drilling fluid into the ocean.


Those good tweets


Talk to me

Send me tips, comments, questions, and your takeaways from the Facebook Papers: casey@platformer.news.