How Facebook undercut the Oversight Board
What really happened between the company and the board over Russia and Ukraine
Today let’s talk about the highest-profile conflict to date between Meta and its Oversight Board, an independent organization the company established to help it navigate the most difficult questions related to policy and content moderation.
Since before the board was created, it has faced criticism that it primarily serves a public-relations function for the company formerly known as Facebook. The board relies on funding from Meta, it has a contractual relationship with it governing its use of user data, and its founding members were hand-picked by the company.
Aiding in the perception that it’s mostly a PR project is the fact that to date, Meta and the board have rarely been in conflict. In the first quarter of its existence, the board made 18 recommendations to Meta, and the company implemented 14 of them. And even though the board often rules against Facebook’s content moderators, ordering removed posts to be restored, none of those reversals has generated any significant controversy. (Also, from Facebook’s perspective, the more the board reverses it, the more credible the board is, and thus the more blame it can shoulder for any unpopular calls.)
That’s what made statements published by both sides today so noteworthy.
After Russia’s invasion of Ukraine in February, Meta had asked the board to issue an advisory opinion on how it should moderate content during wartime. The conflict had raised a series of difficult questions, including under what circumstances users can post photos of dead bodies or videos of prisoners of war criticizing the conflict.
And in the most prominent content moderation question of the invasion to date, Meta decided to temporarily permit calls for violence against Russian soldiers, Vladimir Putin, and others.
All of which raised important questions about the balance between free expression and user safety. But after asking the board to weigh in, Meta changed its mind — and asked board members to say nothing at all.
Here is Meta’s statement:
Late last month, Meta withdrew a policy advisory opinion (PAO) request related to Russia’s invasion of Ukraine that had previously been referred to the Oversight Board. This decision was not made lightly — the PAO was withdrawn due to ongoing safety and security concerns.
While the PAO has been withdrawn, we stand by our efforts related to the Russian invasion of Ukraine and believe we are taking the right steps to protect speech and balance the ongoing security concerns on the ground.
In response, the board said in a statement that it is “disappointed” by the move:
While the Board understands these concerns, we believe the request raises important issues and are disappointed by the company’s decision to withdraw it. The Board also notes the withdrawal of this request does not diminish Meta’s responsibility to carefully consider the ongoing content moderation issues which have arisen from this war, which the Board continues to follow. Indeed, the importance for the company to defend freedom of expression and human rights has only increased.
Both statements were extremely vague, so I spent the day talking with people familiar with the matter who could fill me in on what happened. Here’s what I’ve learned.
One of the most disturbing trends of the past year has been the way that authoritarian governments in general, and Russia in particular, have used the intimidation of employees on the ground to force platforms to do their bidding. Last fall, Apple and Google both removed from their respective stores an app that enabled anti-Putin forces to organize before an election. In the aftermath, we learned that Russian agents had threatened their employees, in person, with jail time or worse.
Life for those employees — and their families — has only become more difficult since Putin’s invasion. The country passed draconian laws outlawing truthful discussion of the war, and the combination of those laws and sanctions from the United States and Europe has forced many platforms to withdraw services from Russia entirely.
In the wake of Meta’s decision to allow calls for violence against the invaders, Russia said the company had engaged in “extremist” activities. That potentially put hundreds of Meta employees at risk of being jailed. And while the company has since successfully removed its employees from the country, the extremism language could mean they will never be allowed to return so long as they work at Meta. Moreover, employees’ families still in Russia could be subject to persecution.
There is precedent for both outcomes under Russia’s extremism laws.
So what does the Oversight Board have to do with it?
Meta had asked for a fairly broad opinion about its approach to moderation and Russia. The board has already shown a willingness to make expansive policy recommendations, even on narrower cases submitted by users. After asking for the opinion, the company’s legal and security teams became concerned that anything the board said might somehow be used against employees or their families in Russia, either now or in the future.
Technically, the Oversight Board is a distinct entity from Meta. But plenty of Westerners still refuse to recognize that distinction, and company lawyers worried that Russia wouldn’t, either.
All of this is compounded by the fact that tech platforms have gotten little to no support to date, from either the United States or the European Union, in their struggles to keep key communication services up and running in Russia and Ukraine. It’s not obvious to me what western democracies could do to reduce platforms’ fears about how Russia might treat employees and their families. But discussions with executives at several big tech companies over the past year have made it clear that they all feel like they’re out on a limb.
All that said, today’s news still represents a significant blow to the Oversight Board’s already fragile credibility — and arguably reduces its value to Facebook. The company spent several years and $130 million to create an independent body to advise it on policy matters. To ask that body for its advice — advice that would not even be binding on the company — and then decide belatedly that such advice might be dangerous calls into question the point of the entire enterprise. If the Oversight Board’s only role is to handle the easy questions, why bother with it at all?
Facebook and the board declined to comment to me today beyond their statements. It’s fair to note that despite the reversal here, the company has stood up to Russia in some important ways — including standing by that decision to let Ukrainians call for Putin’s death. Meta could have rolled over for Russia on that one, and chose not to.
At the same time, once again we find that at a crucial moment, Facebook executives failed to properly gauge risk and public perception. Russia has been threatening platform employees since at least last September. Whatever danger there was to employees and their families existed well before Facebook sought an opinion from its board. To realize that only weeks later … well, talk about an oversight.
I’m on record as saying that the Oversight Board has changed Facebook for the better. And when it comes to authoritarians threatening platform employees, tech companies have distressingly few options available to them. The Russia case, like so many others involving authoritarian pressure, was truly a no-win situation.
But that doesn’t mean it won’t have collateral damage for both Meta and its board. Critics always feared that if the stakes ever got high enough, Facebook would blink and decide to make all the relevant decisions itself. And then Vladimir Putin went and invaded his neighbor, and the critics were proven to be right.
Correction: This article originally stated that Meta had been designated as an “extremist organization.” While a court charged it with participating in extremist activities, it did not designate the organization as extremist.
I enjoyed watching Google’s big (two-hour!) keynote today, which focused on the road to building ubiquitous ambient computing. As always, the company’s focus on building utility sets it apart from its rivals, which spend more time on enabling creativity (Apple) or abstractions like “connection” (Meta). David Pierce has a nice overview of the company’s strategy at The Verge.
I may have some more to say about I/O tomorrow after I digest it a bit. In the meantime, some of the announcements that caught my eye:
A new “immersive view” for Google Maps that shows select cities in rich detail. The company added some nice augmented-reality features to Maps as well.
The News Literacy Project, which works to educate school-age children about misinformation, disinformation and related issues, is seeking a Spanish-speaking head of communications. Given the proliferation of Spanish-language hoaxes, it’s an important role at a critical time. Get the job and you’ll get to work with friend of Platformer and tech industry legend Walt Mossberg, who serves on the board. Check the posting out here.
The European Union formally introduced legislation that would destroy end-to-end encryption, forcing platforms to scan all user messages upon request. Absolutely disastrous. (James Vincent / The Verge)
The Fifth Circuit Court of Appeals voted 2-1 to let Texas enforce an insane law that allows users to sue social media companies for “censoring” them. It’s a great day to be a Nazi in Texas. (Ben Brody / Protocol)
The Securities and Exchange Commission is investigating Elon Musk’s late disclosure of his big stake in Twitter, which saved him tens of millions of dollars. (Dave Michaels / Wall Street Journal)
Will the FTC block Elon Musk’s intended acquisition of Twitter? It may be a little more likely than we might assume. (Josh Sisco and Sarah Krouse / The Information)
Smaller investors may have a path to get a stake in Musk’s Twitter through a series of new special-purpose vehicles that are now being established. (Katie Roof, Nishant Kumar, and Swetha Gopinath / Bloomberg)
Elon Musk’s intention to re-platform Donald Trump could give cover to YouTube, Facebook, and others to do the same. (Cristiano Lima / Washington Post)
SEC Chairman Gary Gensler heightened his criticism of crypto exchanges, saying some of them are trading against their own customers. (Allyson Versprille and Olga Kharif / Bloomberg)
Democrats narrowly confirmed a third appointment to the FTC, giving them a majority and strengthening Lina Khan’s antitrust push. (Emily Birnbaum and Leah Nylen / Politico)
Twitter introduced a policy to combat spam and duplicative content by removing such posts from search and trends. (Aisha Malik / TechCrunch)
Twitter also introduced a browser game that explains its privacy settings (?) (Jay Peters / The Verge)
San Francisco police are using driverless cars as mobile surveillance cameras. (Aaron Gordon / Vice)
Google, Microsoft and Yahoo are part of a coalition of tech companies that endorsed a proposed New York law that would ban the use of search warrants that can identify you based on your location data and internet searches. Good! (Zack Whittaker / TechCrunch)
Luna holders who staked their coins won’t be able to exit the potentially disastrous situation for weeks. (Tim Copeland / The Block)
Do Kwon, CEO of TerraUSD creator Terraform Labs, reportedly created an earlier failed stablecoin under a pseudonym. (Sam Kessler and Danny Nelson / CoinDesk)
YouTube is testing a feature that lets you give channel memberships as gifts. (Jay Peters / The Verge)
YouTube Shorts added a TikTok-like “green screen” feature. (Sarah Perez / TechCrunch)
Airbnb showed off a redesign that groups dwellings by category, among other nice changes. (Jordan Crook / New York Times)
Talk to me
Send me tips, comments, questions, and advisory opinions: firstname.lastname@example.org.