Trust and safety workers on why they're not speaking out
What readers say I got wrong in our last edition

On Thursday I wrote about my visit to TrustCon, where a feeling I have been having all year crystallized: that despite an unprecedented siege on their profession, trust and safety leaders have been remarkably quiet. I asked why no leader in the field had, as far as I could tell, quit their job in protest of policy reversals on hate speech, misinformation, and other issues and talked about it publicly. I described a shift in the field's priorities from human rights-centered practices to a more pragmatic compliance regime. I argued that, whatever the causes for that silence, it looked to the outside world like surrender.
The piece generated more responses than almost any other edition of Platformer. In the days since, I've sifted through dozens of your messages, reached out to many of you to hear more, and added lots of nuance to my own understanding of trust and safety work in 2025.
More notable than the sheer volume of messages, at least to me, is how polarizing the column turned out to be. Many of you wrote to me in agreement, and thanked me for engaging with the subject. "I agree with every word," a former Meta employee wrote.
Matheus Bevilacqua, who worked on trust and safety teams at Uber and Zoom, said the shift I had described is real. "I think trust and safety used to be about creating space for authentic expression, building systems that allowed users to connect without fear or harm," they said. "Now, that vision has been hollowed out and replaced by a cold, compliance-driven approach. The work has shifted from meaningfully protecting people to ticking legal checkboxes and racing to 'win' the AI market. The industry used to ask, 'How can we prevent harm and empower people?' Today, the question is often just, 'What’s the minimum we need to do to avoid fines or regulatory action?'”
Others were quite angry. "All you accomplished here was pissing all over the afterglow of one of the few opportunities for fellowship a very besieged and downtrodden group of people have in the entire year, in what is already the single most difficult year in the history of the field," one TrustCon attendee wrote.
Some readers objected to my tone. "It seems unnecessarily adversarial," wrote Alice Goguen Hunsberger, head of trust and safety at Musubi, on LinkedIn. "This is exactly why T&S people often feel unsafe talking to press. We’re painted as 'failures' even when we are doing so much."
For the record, I don't consider anyone I've written about in this context a failure. To the contrary, ever since I began writing about them frequently in 2019, content moderators have been my heroes. At the same time, I did want to challenge leaders in the industry to consider the consequences of not speaking out — even if they have very good reasons for remaining quiet.
Still, many of you said I gave short shrift to many of the practical and even philosophical reasons that trust and safety leaders have not been speaking out. And so today, in the spirit of healthy debate, I wanted to yield the floor to these readers.
Here, according to you, are the big things I missed.
To criticize rank-and-file trust and safety workers is to criticize the wrong people. A common view in responses is that the true blame for platforms' retrenchment on human rights issues lies with the C-suite and other top executives. To criticize their subordinates is misguided, some readers said. "The people at TrustCon want to speak, want to push forward, want to make things better," a former trust and safety worker at a big platform wrote. "But they're the cogs. They can't always speak, they can't always make the change loudly. Please don't criticize them, and please don't use TrustCon as a way to punch up, because honestly it's just punching down."
It's not just that workers fear for their jobs if they speak out — it's that they fear no one else will hire them if they do. I wrote in my original piece that workers are afraid they will be fired if they speak out against platforms' reversals on hate speech and other issues. The larger issue may be that the industry is contracting overall, leaving workers with fewer options — particularly given that Meta, Google, X, and other platforms all largely aligned on crafting speech policies that would appease the Trump administration.
Many trust and safety workers believe speaking out won't have any practical effect. This one pained me to hear, since journalism is rooted in the idea that galvanizing public attention can lead to positive change. But multiple readers told me they believe that they simply do not have the leverage to change their companies' positions.
Others made a related point — that to the extent trust and safety is effective today, it is largely because workers don't make their case to the press.
"Like you, I’m disappointed by the lack of public conversation about the rollback in online safety policies, and gutting of safety teams across the industry," said Chris Roberts, a trust and safety leader who wrote to me in his personal capacity. "But I think it’s a mistake to expect T&S professionals to serve primarily as public advocates. Our work is — and can only be successful — behind the scenes. Writing enforcement guidance, interpreting policy edge cases, and advocating internally for user safety, within the constraints of the current political climate. The work isn't idealistic or public — it's pragmatic and behind the scenes."
People are scared. I mentioned in my original piece that trust and safety workers often receive threats for the work they do. But several readers thought I had under-weighted both how scared workers are, and how justified those fears are, given the stakes. Multiple readers brought up the terrifying harassment that Yoel Roth faced after quitting his job as head of trust and safety at Twitter.
"There are literally websites dedicated to doxing people who speak out about specific issues, and those databases then get used by departments like ICE to target people with the full force of the US government," one reader wrote. "There's a huge impact on individuals who speak out. And that is the case under any kind of fascist regime, but my takeaway from your piece was that it punched down at the very people who ARE trying to have these discussions and do something about them."
European tech regulation is better than I seemed to give it credit for. In my original piece, I wrote that one of the primary forces behind trust and safety's shift away from a more idealistic, human rights-centered approach is the European Union's Digital Services Act. The law's transparency and safety requirements are beneficial, I wrote, but encourage platforms to think about trust and safety as a more mundane compliance operation.
One trust and safety worker at a large platform told me that this state of affairs is far preferable to the alternative.
"I'd be way back in the long line of DSA defenders, but what it does do, and what it requires the platforms to do, is raise the floor," they wrote. "It does not serve to lower the ceiling. If the time comes where market conditions again appear to suggest that robust T&S enforcement serves as a competitive advantage, and political winds turn sufficiently, we'll both be able to watch in real time as T&S teams are resourced, reinforced, and renewed. The idealism you eulogize did not work. Full stop. Idealism tempered by pragmatism, enforced by the power of the state? At least something gets done."
I'm grateful to all the readers who wrote in to share their thoughts. And I'm grateful to everyone who continues to work in trust and safety despite the internal turmoil, the external threats, and the occasional obnoxious column in the press. I cover these issues because, like trust and safety workers, I believe that human rights should extend to online spaces. And it's a cruel irony that platforms rolled back protections for their users this year in the name of free expression, while at the same time their own workforces (credibly!) fear reprisals from the US government simply for discussing their work.
And they have lots of company. Academics, nonprofits, and journalists — all historical allies of trust and safety work — are now also facing censorship attacks from House Republicans and the Trump administration. The cost of speaking out keeps going up. And before long, trust and safety workers may not be the only voices in civil society that are suddenly conspicuous by their absence.
Elsewhere in chilled speech: A look at how liberal nonprofit Media Matters is struggling as it faces a barrage of lawsuits by Elon Musk, along with investigations by Trump’s FTC and Republican state attorneys general, over its constitutionally protected political speech. (Kenneth P. Vogel, Kate Conger and Ryan Mac / New York Times)

Sponsored

Fly.io lets you spin up hardware-virtualized containers (Fly Machines) that boot in milliseconds, run any Docker image, and scale to zero automatically when idle. Whether your workloads are driven by humans or autonomous AI agents, Fly Machines provide infrastructure that's built to handle it:
- Instant Boot Times: Machines start in milliseconds, ideal for dynamic and unpredictable demands.
- Zero-Cost When Idle: Automatically scale down when not in use, so you're only billed for active usage.
- Persistent Storage: Dedicated storage for every user or agent with Fly Volumes, Fly Managed Postgres, and S3-compatible storage from Tigris Data.
- Dynamic Routing: Seamlessly route each user (or robot) to their own sandbox with Fly Proxy and fly-replay.
If your infrastructure can't handle today's dynamic and automated workloads, it's time for an upgrade.
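To make the sandbox-routing idea concrete, here is a minimal sketch of the fly-replay pattern described above, written as a small Node server in TypeScript. It is illustrative only: the x-user-id header, the machine IDs, and the lookup table are hypothetical stand-ins invented for this example, not part of Fly's API or any real deployment; only the fly-replay response header itself is Fly Proxy's documented mechanism.

```typescript
// Minimal sketch of fly-replay routing. Hypothetical: the x-user-id header,
// machine IDs, and lookup table are invented for illustration.
import http from "node:http";

// Hypothetical mapping from a user or agent ID to the Fly Machine
// hosting that user's dedicated sandbox.
const sandboxMachines = new Map<string, string>([
  ["user-42", "17811953c92e18"], // example machine ID
]);

http
  .createServer((req, res) => {
    const userId = req.headers["x-user-id"];
    const machine =
      typeof userId === "string" ? sandboxMachines.get(userId) : undefined;

    if (machine) {
      // A response carrying the fly-replay header asks Fly Proxy to replay
      // this request on the named Machine — i.e., the user's own sandbox.
      res.writeHead(204, { "fly-replay": `instance=${machine}` });
      res.end();
      return;
    }

    res.writeHead(404, { "content-type": "text/plain" });
    res.end("No sandbox provisioned for this user.");
  })
  .listen(8080);
```

The scale-to-zero behavior the list mentions is configuration rather than code: in a fly.toml service definition, the auto_stop_machines and auto_start_machines settings let idle Machines shut down and boot back up on the next request.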

Governing
- People who try to watch porn online in the UK now need to submit a selfie or a photo ID as websites comply with the Online Safety Act’s requirement to implement age checks. (Jackson Chen / Engadget)
- But the rollout has been a mess, with UK web users presented with selfie checks to visit cider-related subreddits and other innocuous content. (Jess Weatherbed / The Verge)
- Fortunately, age checks on platforms like Reddit and Discord can be easily bypassed with the game Death Stranding’s photo mode. (Tom Warren / The Verge)
- A global wave of age verification laws could threaten free speech, experts warn, and ultimately harm both children and adults. (Matt Burgess and Lily Hay Newman / Wired)
- Meta and other app developers are fighting with Apple and Google over who’s responsible for child safety online as age verification laws pick up across states. (Emily Birnbaum / Bloomberg)
- Apple is expanding its app age-rating system and requiring app developers to answer a new set of age-rating questions to identify sensitive content. (Sarah Perez / TechCrunch)
- DOGE is now using an AI tool, the “DOGE AI Deregulation Decision Tool,” with the goal of analyzing and slashing 50 percent of regulations by the first anniversary of President Trump’s inauguration. (Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel / Washington Post)
- A look at the use of regulatory "sandboxes" in Trump’s AI Action Plan, which seek to let AI companies test their technologies without regulation. (Aaron Mak / Politico)
- Most land and environmental defenders report experiencing online abuse or harassment related to their work. They named Facebook as the worst platform for abuse, followed by X, WhatsApp and Instagram. (Justine Calma / The Verge)
- An American who helped North Korean undercover agents infiltrate the US to fund Kim Jong Un’s rocket program documents her path to becoming entangled in espionage. (Evan Ratliff / Bloomberg)
- A look at how the AI boom is attracting people and capital back to San Francisco. (Danielle Abril / Washington Post)
- There’s no doctor-patient confidentiality when using ChatGPT as a therapist, Sam Altman warned, as the industry has yet to figure out how to protect users’ sensitive conversations. (Sarah Perez / TechCrunch)
- Meta will no longer accept political, election or social issue ads in the EU, it said, in response to new regulation. (Sara Fischer / Axios)
- An investigation into how Musk reportedly ordered Starlink to cease satellite service when Ukraine was pushing to retake territory from Russia in late 2022. (Joey Roulette, Cassell Bryan-Low and Tom Balmforth / Reuters)
- India is blocking 25 streaming services with millions of viewers for allegedly promoting “obscene” content. (Jagmeet Singh / TechCrunch)

Industry
- The weekend's biggest story was Tea, an app designed to let women anonymously share their dating experiences and categorize the men they date with “red” and “green” flags. (Angela Yang / NBC News)
- The reason it was such a big story is that tens of thousands of women’s selfies and photo IDs leaked online after hackers breached the Tea app. It appears that the app had minimal security. (Kevin Collier and Angela Yang / NBC News)
- A second breach revealed it was possible for hackers to access messages between users discussing abortions, cheating partners and phone numbers. (Emanuel Maiberg and Joseph Cox / 404 Media)
- OpenAI’s GPT-5 is reportedly much better at coding than prior models, both in academic programming problems and in practical programming tasks. (Stephanie Palazzolo / The Information)
- A look at the high-schoolers who solved all six math problems at the International Mathematical Olympiad, besting OpenAI and Google DeepMind’s AI systems. Will 2025 be the last year kids outcompete the computers? (Ben Cohen / Wall Street Journal)
- Google’s chief scientist and AI leader Jeff Dean has quietly backed 37 AI startups over the past two years, including Perplexity, DatologyAI and Emerald AI. (Sharon Goldman / Fortune)
- Google is testing a vibe-coding tool called Opal, which is now available to US users through Google Labs. (Ivan Mehta / TechCrunch)
- X is testing a way to use Community Notes to highlight well-liked posts from users. (Sarah Perez / TechCrunch)
- Shengjia Zhao, a former OpenAI researcher who worked on the original version of ChatGPT, is now Meta’s chief scientist for its new superintelligence AI group, Mark Zuckerberg announced. As M.G. Siegler notes, though, he was actually hired in June. So why the delayed announcement? (Kurt Wagner / Bloomberg)
- Ray-Ban maker EssilorLuxottica SA beat revenue expectations in the second quarter, as it sold three times as many Meta Ray-Bans as it expected to. (Antonio Vanuzzo and Flavia Rotondi / Bloomberg)
- Alibaba showcased its Quark AI glasses to the public for the first time, marking its entrance into the competitive smart glasses market. (Ann Cao / South China Morning Post)
- Anthropic is reportedly in early talks to more than double its valuation to more than $150 billion in a funding round that’s expected to raise at least $3 billion. (George Hammond, Ivan Levingston, James Fontanella-Khan and Chloe Cornish / Financial Times)
- Microsoft is launching a new Copilot virtual character that can interact with users in real time. (Tom Warren / The Verge)
- Microsoft’s controversial Recall AI feature has met with resistance from developers, and is being blocked by AdGuard and the Brave browser. (Tom Warren / The Verge)
- Microsoft is testing a new Copilot Mode in its Edge browser that will let Copilot search across all open tabs and handle tasks. (Tom Warren / The Verge)
- The Johns Hopkins University will license its books to train LLMs, it announced, and authors have until the end of August to opt out of the licensing agreement. (Ellie Wolfe / Baltimore Banner)
- A new demo shows a potentially unsettling future for AI in video games as characters realize their existence is simulated, raising concerns about the ethics behind sentient characters. (Zachary Small / New York Times)
- China’s biggest annual AI conference saw startups debut robots that can perform an array of tasks, from messily dispensing popcorn and drinks to playing mahjong. (Saritha Rai and Annabelle Droulers / Bloomberg)
- Top universities in China are encouraging their students to use more AI and treating it as a skill to be mastered. (Caiwei Chen / MIT Technology Review)
- Many finance professionals in India are increasingly practicing astrology-based trading, which has evolved into an estimated $7 billion market. (Preeti Soni and Akriti Sharma / Bloomberg)
- People who believe in manifestation are increasingly moving from vision boards to using AI to generate videos depicting the life they want. (Alyson Krueger / New York Times)
- AI chatbots have become useful tools for neurodivergent people as they help clarify tone and context in conversations to communicate more effectively. (Hani Richter / Reuters)
- College graduates are now finding high-paying jobs in training AI. For now! (Emma Haidar / Bloomberg)

Those good posts
For more good posts every day, follow Casey’s Instagram stories.

(Link)

(Link)

(Link)

Talk to us
Send us tips, comments, questions, and more rebuttals: casey@platformer.news. Read our ethics policy here.