Where Meta's biggest experiment in governance went wrong

Five years after the Oversight Board's creation, few are satisfied with the result. Can it be saved?

Five years ago this week, Meta's Oversight Board accepted its first cases. Together, they highlighted the company's global reach — cases originated in Malaysia, Azerbaijan, and Brazil, among other countries — and the high-stakes hair-splitting that Meta's content moderation apparatus attempts to navigate. When is it OK for a woman's nipple to appear on Facebook? Can you quote Goebbels, if it's actually a commentary on rising fascism in America? At what point does a veiled threat against the president of France become an incitement to violence?

Over the past half-decade, the Oversight Board has sought to make decisions like these more consistently, and in public. In more than 200 published decisions and 317 policy recommendations to Meta, it has worked to draw brighter lines around what is and is not allowed.

The Oversight Board emerged from a series of crises, including the Rohingya genocide, Cambridge Analytica, and the larger backlash against Facebook following Donald Trump's election as president in 2016. At the time, CEO Mark Zuckerberg had the final say over the fate of every post on his platforms; the Oversight Board represented an effort to restore public trust by creating a check on that power.

A retrospective on its first five years published by the board today documents the results of its efforts, including a push to allow Iranian protesters to post "death to Khamenei" as political speech, and an agreement from Meta to tell users which specific rule a post violated when it is removed. The board also led an inquiry that resulted in Meta acknowledging that its over-moderation of Palestinian content in 2021 had an "adverse human rights impact" on Palestinians' free expression.

Meta, for its part, has funded the board through the next two years.

At the same time, it seems likely that most users of Facebook and Instagram still have little to no idea that the board exists. The board was at its most prominent in 2021, when Meta asked it to consider whether Trump should be permanently banned for his actions related to the January 6 Capitol riots. But the board punted that decision back to Meta, and since then has largely faded from public view.

The board has done some good work. But it has taken on disappointingly few cases, and can sometimes take the better part of a year to render a decision, even when the post in question has credibly incited political violence. (I note that the board's press release today says it will release a YouTube video about its impact next week, suggesting it even missed the deadline for a promotional video it had five years to make.)

Over time, the founding promise of the Oversight Board — that it would serve as a kind of Supreme Court for content moderation, a judicial branch coequal to Meta's executive — has been revealed as a fantasy. And yet the board does push Meta on human rights issues, particularly outside the United States, and wins praise from civil society groups for giving them a place to channel their advocacy.

Almost everyone I've spoken to on the subject says that they are in some ways disappointed by the Oversight Board's performance. And — often in the same breath — they will tell me that it beats the alternative. Particularly during a year when Meta abruptly sidelined its policy team, empowering lobbyists to rewrite community guidelines on the fly to curry favor with the Trump administration.

As it enters its next five years, then, the board faces a moment of reckoning. Its early dream that other platforms would hire it to do for them what it does for Meta has not yet come to pass. And what it does accomplish for the average Meta user isn't always clear.

Meanwhile, where Zuckerberg once saw the board as a shield against the threat of onerous regulation, he has since found in Trump a president who will happily advocate for his interests in exchange for a few million dollars and an embrace of the administration's culture war against transgender people, immigrants, and others.

"There was actually a period of time in which Mark thought it was both in his best interest and the right thing to do," said Kate Klonick, a law professor at St. John's University, who chronicled the Oversight Board's development, of Zuckerberg's mindset in 2020. But that was also a time when tech companies felt more vulnerable to US regulators, she said — "and part of that was because they couldn't buy the White House."

II.

Board members I've spoken with acknowledge the limitations in what they have been able to accomplish so far. (And politely duck the question about whether what Zuckerberg and Meta want from them has changed over the past five years.)

Making the board more effective might begin with letting go of the vision Zuckerberg laid out for it before its founding: as a Supreme Court that would create a series of binding precedents. Whatever power Meta has been willing to extend to its board, it has reserved the right to rewrite policy as it sees fit, even against the board's recommendations, often without bothering to explain its actions in any depth.

As a result, the board's rulings on the relatively small number of cases that it hears have generally had limited impact.

"It's always been a little bit of a fiction that the individual decisions on pieces of content are themselves very meaningful," Paolo Carozza, a law professor at Notre Dame and co-chair of the Oversight Board, told me in an interview. "We all understand more and more that each case is only really meaningful and helpful as a tool for leveraging the board's influence if we really do a good job of linking the cases to systemic, wide issues."

Carozza told me he had never liked analogizing the Oversight Board to a court. Among other things, the analogy pushed the board to act like a court — aloof and hesitant to comment on Meta's actions except when directly connected to a case or policy advisory. Courts, after all, do not comment on cases before they are decided, or issue statements when companies behave badly. Instead, they wait for cases to arrive and then rule narrowly on the facts before them.

"I think that has constrained us," Carozza said.

This judicial posture may have granted the board a certain legitimacy, particularly in its first year. Klonick told me she has been impressed with the quality of thinking on display in the board's decisions. "The writing is more thoughtful" than she thought it would be, she told me. "It's more legal and rigorous."

On the other hand, the events of 2025 have challenged the idea that the board is providing real oversight. A series of cascading revelations about the company has been met with only the most timid of statements by the board — or, more often, silence.

In January, after the company announced it would create new categories of permitted hate speech to impress the Trump administration, the board — which had been blindsided by Meta's announcement — issued a bizarrely upbeat statement saying it "welcome[d] the news that Meta will revise its approach to fact-checking, with the goal of finding a scalable solution to enhance trust, free speech and user voice on its platforms." When it was called to rule on two closely watched cases involving anti-trans rhetoric, a divided board ruled in favor of Meta's decision to leave them up.

At least it engaged with that issue. In the months since, we have learned that Meta's content policies let its AI bots engage in "sensual" roleplay with children. It let users create chatbots using celebrity likenesses without permission, which then made frequent sexual advances. Current and former employees have testified that the company suppressed research on child safety in VR, and a judge has said that Meta lawyers ordered staff to block or delete internal research into the mental health of teens on its platforms to reduce legal liability.

To cap off the year, we learned last month that Meta's internal projections showed that it would earn $16 billion in 2024 from scams and ads for banned goods; that a third of all successful scams in the US take place on its platforms; and that users had to be caught attempting to traffic people for sex 17 times before their accounts would get banned.

Meta denies or disputes much of the above, even though a huge portion of those findings come from its own employees. At a minimum, these findings would seemingly demand some sort of response from a board entrusted with overseeing the human rights of Meta's billions of users. None, though, has been forthcoming.

"I totally agree with you, in a sense, that there's a lot more room for the board to speak more generally about issues," Carozza said. The board is planning stakeholder consultations on child safety issues, he noted, and hopes to say more about the subject in the coming year.

In the meantime, the board's silence risks looking like a comment on its own independence. Given that its existence relies on money from Meta — which funds it only a couple of years at a time — critics have long questioned how stridently the board would be willing to criticize its patron. The evidence from the first five years does not look great.

Klonick said the funding structure may be at fault.

"It limits their willingness to push back," she said. "Because even for a lot of the people on the board, it's just a very nice paycheck, and they'd rather not give up that paycheck."

III.

That said, the board has also accomplished things that matter, Klonick told me.

"Before the board existed, it was real black magic for civil society, governments, every type of group to have a voice at these platforms when something happened," she said. "The one huge benefit of the board has been a mechanism to basically have a direct voice — a consolidated place to express signals and do it in a transparent way. So it's not just 'does anyone know anyone at Facebook? Let's call them.'"

She also pushed back on the court analogy from a different direction than Carozza. The board, she said, functions less like the US Supreme Court and more like a European inquisitorial body — "surveying as widely as possible the various kinds of values that they think they have to preserve."

Carozza noted — correctly — that the board's impact has been greatest outside the United States. The board often takes up cases that the US press would likely never write about, and which might have otherwise languished in Meta's unfathomable automated systems.

"When you look at this at a global scale, and especially all across the global South," he said, "I think the value of the board is especially strong there."

The darker suggestion in that idea is that the board has been able to do the most good in regions where Meta's leadership cares the least. Through no fault of its own, the board operates in a world where an unspoken rule holds that Meta content policy must not damage its relationship with the US government. How do you provide "oversight" over that? Particularly when the Trump administration is now threatening to deny visas to any foreign worker who ever worked as a content moderator?

IV.

Despite her criticisms, Klonick told me that she isn't ready to give up on the Oversight Board experiment. Asked to grade the board, she landed on a reluctant C.

"This really did not meet my expectations," she told me. "But would I have changed it or decided not to do this at all? Absolutely not. I still think it was a project worth doing, and it's not completely without hope yet as a model."

Particularly because the need that the board was created to serve remains as great as ever. "It's bad for government to control speech," Klonick said. "And it's bad for billionaires to control speech. And it was always really, really important for users to have a mechanism of direct impact and control."

At its best, the Oversight Board has been that mechanism. But it has too rarely been at its best.

It is also only one actor in a broader tech ecosystem — and US government — that has retreated from the work of protecting human rights.

Speaking about the board, Carozza told me, "It only makes sense in a larger ecosystem and culture of wanting to protect human rights and wanting to protect human dignity and doing the right thing."

Ultimately, I remain convinced that the board has been a useful experiment. But five years of pretending to be a court has not given Meta's user base much that you could call justice. If it's serious about achieving that mission, it's time for the board to try being something else.

On the podcast this week: Kevin and I go deep on OpenAI's Code Red, Gemini 3, and Opus 4.5. Then, it's time once again for the Hard Fork Review of Slop.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

New Grooming detection added! Safer by Thorn is a purpose-built child sexual abuse material (CSAM) and exploitation (CSE) solution powered by trusted data and Thorn’s issue expertise. 

The Safer Predict text classification model now detects messages and conversations that contain signs of suspected grooming. When indicators of sexual exploitation or abuse of a minor are detected, the model applies a “grooming” label and confidence score to each message.

Learn how Safer by Thorn can help you mitigate the risk of your platform hosting CSAM or being misused to sexually exploit children.

Following

The new LLM skepticism

What happened: AI researcher Yann LeCun made his first comments on Thursday about the world-model startup he’s leaving Meta to found. And while he has long been a skeptic of the idea that large language models will take us all the way to superintelligence, his comments underscore a fresh flare-up of skepticism about LLMs among prominent practitioners and observers.

“Silicon Valley is completely hypnotized by generative models,” LeCun said at the AI-Pulse event in Paris on Thursday. “So you have to do this kind of work outside of the Valley, in Paris.”

Last week, on his podcast with OpenAI co-founder Ilya Sutskever, Dwarkesh Patel raised his frequent criticism that LLMs have shown little progress in continual learning: the ability for AI systems to learn from experience the way humans do, rather than having to be updated via tedious reinforcement learning methods. Sutskever agreed that new approaches are necessary, and faintly suggested that his company, Safe Superintelligence, may be working on them.

In a follow-up post today, Patel elaborates on the slow progress in continual learning and why he thinks it will delay the arrival of AGI by a “decade or two.” “Either these models will soon learn on the job in a self-directed way — making all this pre-baking pointless — or they won’t — which means AGI is not imminent,” he wrote.

If people don’t think continual learning is necessary for truly powerful AI, he argued, it’s because “people are underrating how much company- and context-specific skills are required to do most jobs.”

This weekend at the NeurIPS AI conference, Turing Award winner Richard Sutton gave a talk on the same themes.

“We need agents that learn continually. We need world models and planning. We need knowledge that is high-level and learnable,” he argued. “As AI has become a huge industry, to an extent it has lost its way.”

Why we’re following: Despite the recent impressive performance gains in Google's Gemini 3 and Anthropic's Claude Opus 4.5, there's still plenty of skepticism in Silicon Valley about the ultra-short timelines of frontier lab CEOs. While they insist that AGI may be no more than a few hundred days away, other practitioners are highlighting the obvious gaps that remain.

What people are saying: AI researcher François Chollet agreed with Patel’s argument. In a post on X, he said, “Either you crack general intelligence — the ability to efficiently acquire arbitrary skills on your own — or you don't have AGI.”

On Nayeema Raza’s podcast, Turing Award winner Geoffrey Hinton was asked to explain why Yann LeCun is so excited about world models. The intuition behind the idea: “Just learning it from language seems kind of absurd when you could actually look at the world and interact with it,” Hinton said.

Asked how many years are left before AGI, Hinton said we still have some ways to go. “I think it'll be within 20,” he said. “I think 10 isn't a bad estimate.”

—Ella Markianos


Meta poaches Apple’s Alan Dye 

What happened: Meta poached Apple’s top designer Alan Dye, who designed the interface of the Vision Pro headset and led the Liquid Glass redesign of Apple’s operating systems. He will be replaced at Apple by longtime designer Stephen Lemay.

Dye will lead a newly created creative studio in the company’s Reality Labs division, Meta CEO Mark Zuckerberg said, where he will be joined by another former Apple designer, Billy Sorrentino, to “bring together design, fashion, and technology” in Meta’s next generation of products.

Dye’s move is the latest in a string of high-profile departures from Apple. Its longtime COO Jeff Williams retired last month, AI head John Giannandrea is stepping down after years of struggling to catch up in the AI race, and its general counsel and top sustainability executive both stepped down today. Dye's departure also represents another design loss for Apple following Jony Ive’s exit in 2019; Ive’s company io was later acquired by OpenAI.

Why we’re following: Bloomberg’s coverage positioned this as a huge get for Meta, as Apple is still struggling to deliver on its AI promises and competitors are pushing into AI-powered consumer devices. 

But some Apple insiders are quietly thrilled about Dye’s exit. Neither the Vision Pro interface nor Liquid Glass received a particularly positive reception. Sources inside Apple told tech blogger John Gruber that “everyone…is happy — if not downright giddy — at the news that Lemay is replacing Dye.”

What people are saying: “I think this is the best personnel news at Apple in decades. Dye’s decade-long stint running Apple’s software design team has been, on the whole, terrible — and rather than getting better, the problems have been getting worse,” Gruber wrote.

On the other hand:

“This is a guy Apple would have never gotten rid of no matter what anyone on [X] thinks about Liquid Glass,” Bloomberg managing editor Mark Gurman wrote, standing by Bloomberg’s positioning of the news. “Apple is losing top people like it hasn’t in 30 years.”

Dye’s post announcing his move on Instagram Stories “seems almost designed to offend,” wrote designer Sebastiaan de With in a post on X with nearly 100,000 views. “The horrible ‘Create Mode’ typesetting. Using a Steve Jobs quote to refer to going to Meta. Yikes.”

Others started joking about Apple’s notoriously hard-to-use features that rolled out under Dye’s tenure. Juan Buis, a content designer, wrote in a post with more than 500,000 views and 27,000 likes: “to commemorate Alan Dye moving from apple to meta, here's one of his best quotes.”

—Lindsey Choo

Side Quests

President Trump’s proposal to block state AI regulation is reportedly still stalling. The Trump administration said it would deny H-1B visas to foreign applicants who had worked as content moderators for social platforms, if they were deemed to have "censored" protected expression. 

A group of researchers is urging the pope to take AI safety seriously.

How Palantir shifted from its onetime “progressive values” to its new focus on helping ICE deport people faster.

OpenAI must produce millions of chat logs from ChatGPT in its copyright lawsuit with the New York Times. A California ballot initiative is seeking to reverse OpenAI’s for-profit conversion.

OpenAI is developing a new model codenamed “Garlic” in response to Google’s recent gains, and testing a new way to get its LLM to tell on its own bad behavior. The company agreed to buy startup Neptune, which makes tools for tracking AI training progress. OpenAI’s nonprofit foundation said it’s donating $40.5 million to 208 nonprofits in the US. ChatGPT referrals to retailer apps on Black Friday grew 28% over last year.

CEO Sam Altman has reportedly explored acquiring or partnering with a rocket company. 

SoftBank founder Masayoshi Son said he “was crying” over having to sell the firm’s Nvidia stake, and wouldn’t have done it had the firm not needed to bankroll its AI investments.

Meta is planning to cut 10 to 30 percent of employees in its metaverse division. But the metaverse will continue until morale improves.

A look at TSMC’s transformation of Phoenix, Ariz. into a chip hub, made possible only by expertise and money from outside the US.

AI models have been used to drain $4.6 million worth of smart contracts via exploits, Anthropic researchers said. A look at the document Anthropic used to train Claude Opus 4.5’s personality. A look at Anthropic’s tight-knit safety team.

Anthropic is acquiring developer tool startup Bun to accelerate Claude Code. Snowflake struck a $200 million deal with Anthropic to make its LLMs available on the company’s platform. CEO Dario Amodei said some AI companies are risking too much by committing to spend hundreds of billions of dollars. Anthropic has reportedly hired lawyers to start work on an IPO that could come as soon as 2026. 

Half of the states in the US now require age verification to watch porn. The UK’s Ofcom fined porn website operator AVS Group £1 million for failing to ensure children are blocked from viewing its content.

Ireland is investigating TikTok and LinkedIn over whether their content reporting systems let people safely report CSAM. Hundreds of TikTok accounts have garnered billions of views by posting AI slop of anti-immigrant and sexualized material, according to a new report.

India scrapped an effort to force smartphone makers to put a "security" app controlled by the government onto every device after Apple said it would not comply.

Russia blocked access to Roblox and accused it of “LGBT propaganda.” 

The EU is investigating Meta on antitrust grounds over its new policy to restrict other AI providers’ access to WhatsApp. Meta is launching a centralized support hub for Facebook and Instagram users and testing an AI assistant. The company's nightmarish account recovery process remains a perpetual scandal. 

Taiwan is suspending access to China’s Xiaohongshu — aka RedNote — for one year, citing alleged fraud taking place on the platform. Wait til they hear how many scams take place on Facebook Marketplace.

Google is experimenting with AI news headlines, and many of them are nonsensical. The EU is market testing Google’s offer to fix alleged antitrust violations over its ad tech business with various tweaks.

YouTube will comply with Australia’s under-16 social media ban despite initial protests. Creators are concerned that YouTube’s AI deepfake-tracking tool can be used to train Google’s AI. The next big moneymaker for YouTube creators: slop videos for 1-to-3-year-olds. Yikes.

DeepMind has shifted its focus to "pragmatic interpretability." Deep Think mode is now available to Google AI Ultra subscribers.

Waymos are adopting more aggressive driving styles — although the company’s crash statistics suggest the vehicles represent a public health breakthrough, according to this neurosurgeon.

Microsoft denied a news report that it lowered sales growth targets for some AI products. Sorry to the Copilot sales team.

AWS announced the launch of its Trainium3 custom AI chip, which it says is four times as fast as its previous generation. Amazon will also let cloud clients customize generative AI models for just $100,000 a year. Fire TV’s new Alexa+ feature will let users skip to their favorite movie scenes easily via natural language. Amazon is reportedly preparing to expand its delivery network without USPS after talks stalled. 

Wikipedia cofounder Jimmy Wales said the organization is seeking more AI licensing deals similar to what it has with Google. I'll bet.

Reddit CEO Steve Huffman said r/popular “sucks” and will be replaced with a better feed.

Discord now lets users buy and gift digital items for games.

CNN struck a partnership with prediction market Kalshi as part of our country's effort to fully embed gambling across every surface of public life.

The “AI and decision-making” major is now the second-most popular major at MIT. How AI chatbots developed their distinctive, grating voice. Startups are building replicas of websites from scratch to generate new data. People are uploading their medical records to AI chatbots despite privacy concerns.

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and Oversight Board cases: casey@platformer.news. Read our ethics policy here.