Is anyone left to defend trust and safety?

It’s under assault this year from users, politicians, and its own executives — but the industry has responded with silence

TrustCon 2025. (TSPA)

In 2023, I asked whether we had reached peak trust and safety. Noting the huge number of layoffs at tech companies that year, and the growing political pressure on platforms to stop removing misinformation related to Covid and vaccines, I suggested that the period between Donald Trump’s first election as president and the start of Biden’s presidency might come to be seen in retrospect as a high-water mark for the tech industry’s investment in content moderation.

Two years later, the answer to my question seems obvious. Rolling layoffs across Meta, Google, Microsoft, and other platforms have made careers in trust and safety even more precarious, according to an academic paper published this year. (Its title: The End of Trust and Safety?) And as those workforces have diminished, platforms have also rolled back policies that once sought to protect users against hate speech, propaganda, and even weapons powered by artificial intelligence — and some platform leaders, such as X’s Elon Musk, openly brag about it.

For years, Platformer has been, among other things, a chronicle of the effort tech companies made to improve the integrity of their platforms after the backlash against them that began in 2016. This year, though, that march abruptly reversed — so much so that Meta created new carveouts in its speech policies to protect hate speech against women, immigrants, and transgender people, in the hopes that doing so would curry favor with the newly re-elected President Trump. (It did.)

For more than half a year now, I’ve waited for a single leader in the field of trust and safety to stand up and say something about this retrenchment. To quit their job, to write an op-ed, to call a reporter and talk on the record. To say: this work was important — it saved lives — and it’s wrong that my company isn’t doing it anymore. 

I’m still waiting. Whatever trust and safety leaders have been doing this year, as their industry has been under siege like never before, they have been very, very quiet about it. 

This week, I attended TrustCon, the fourth annual conference of the Trust and Safety Professionals Association, in the hope of getting some answers. In her opening speech, the group’s executive director, Charlotte Willner, spoke — in broad terms — to the collective anxieties of those in the room.

“I … can't ignore that so many things are not getting better,” said Willner, who previously built the first safety operations team at Facebook. “So often in past years, perhaps we felt alone in our work — but at least there was some kind of backstop. Surely even if some individual companies or business leaders made questionable choices — like making Grok AI — for kids! — the broader system would still hold. Surely our democratic institutions would prevent the worst imagined outcomes. Surely there were guardrails we could count on. Instead — we've watched norms and institutions we thought were solid prove to be more fragile than we could ever have imagined.”

Willner didn’t name names — and didn’t have to, since the relevant parties were all well known to those in the room. Still, as I scanned the agenda for the conference, I hoped to find some reflection on where the industry might go from here — or, better yet, how its workers might organize to demand higher standards from their employers.

Instead, though, I found panels speaking to workers’ more mundane, operational concerns: what to do about the proliferation of deepfake “nudification” apps; how to verify users’ ages to comply with new regulations; how to identify content generated by artificial intelligence; and how to use AI to automate content moderation. And in between, attendees took in panels about protecting their own well-being and about mental health resources for moderators. (One afternoon featured a sound bath.)

Trust and safety teams have always been more focused on a company’s day-to-day operations than on public advocacy. They tend to be quiet for lots of reasons, workers told me at TrustCon this week: The threats they face for doing their jobs make them fear for their safety. They had low expectations for their companies to begin with. They fear for their jobs if they speak out.

As Willner told me today, “The entire point of this industry is that nobody is supposed to see you.”

Workers also told me that, for all the high-profile policy changes at big platforms this year, their jobs remain mostly the same. New laws and regulations in the United States and abroad have forced their companies to belatedly pay attention to child safety, and trust and safety workers are now dutifully implementing age assurance platforms, building AI systems to detect grooming, and working on other priorities of the moment.

Moreover, they said, trust and safety has always been subject to political pressures, particularly abroad. Platforms have long experience in navigating authoritarian regimes where they have been forced to remove political speech as a condition for remaining in a country; I’m told some foreign trust and safety workers spent this TrustCon saying to their American counterparts: welcome to the party.

Still, it’s a far cry from the platforms of 10 years ago, which took pride in building legal teams and a policy apparatus designed to withstand pressure from authoritarians. Meta, Microsoft, and other companies made public commitments to honor the United Nations Guiding Principles on Business and Human Rights, which require them to address the human rights violations they contribute to. (Such as, for example, hosting and amplifying speech targeting individuals on the basis of their gender, sexuality, or immigration status.)

Google published “responsible AI” principles in 2018 that pledged not to use AI for “technologies whose purpose contravenes widely accepted principles of international law and human rights.” Meta’s creation of the Oversight Board, which was intended to protect freedom of expression, was perhaps the grandest expression of the human rights era in trust and safety. (And yes, it was also a PR move. But not only a PR move.)

The sustained political pressure campaign from conservatives in the United States played an enormous role in bringing this era to an end. Ironically, so did the passage of the Digital Services Act in the European Union in 2022. 

The DSA seeks to enshrine human rights principles into law by requiring platforms to protect women and children, remove misinformation, and consent to risk audits. While the law has clear benefits, it also accelerated the transformation of trust and safety from an idealistic effort to promote human rights into a dull compliance regime — a system of boxes for the lawyers to check. Attention shifted toward compliance in the EU — and in the United States, where the compliance burden is much lower, platforms could simply relax.  

There are other reasons trust and safety is in retreat, of course. Elon Musk’s decision to blow up his content moderation operation at X inspired Mark Zuckerberg at Meta to follow — albeit in a smaller, more cautious way. Rep. Jim Jordan’s ongoing hearings in the House of Representatives, in which he excoriates social media companies for engaging with the previous administration on content removals, have chilled speech both at platforms and in academia.

And to be clear: real people are suffering as a result. To name just one example: NBC reported last month that X has been flooded with hundreds of posts per hour advertising the sale of child sexual abuse material; the flood follows X’s decision to cut ties with the vendor it previously used to detect CSAM. (X told NBC it would be using its own technology to detect CSAM, but offered no details; NBC’s report suggests that it isn’t working very well.)

In the wake of all that, from the trust and safety people who still work at X, there’s that old familiar sound: silence. 

Workers at TrustCon told me that they see colleagues resisting the decline in various ways: by “quiet quitting,” or actually quitting, or finding subtle ways to thwart efforts to marginalize trust and safety teams. Others have taken medical leave for burnout — often a precursor to resigning.

To the outside world, though, the silence in trust and safety this year looks a lot like unilateral surrender.

My hope is that, in the coming months, the idealists in this profession collectively find their voice. (And caseynewton.01 is my Signal.) My fear, though, is that trust and safety will increasingly come to resemble human resources: another corporate department that presents itself as existing for the benefit of people, when in reality its primary function is to provide legal protection to the company.

In any case, I won’t accept that trust and safety can’t do better than that — because it once did.

On the podcast this week: Kevin and I discuss Trump's AI Action Plan and whether any tech lab will mount even minimal opposition to his plan to purge chatbots of liberalism. Then, we field some critical questions about our coverage of AI from writers and thinkers, including John Herrman, Alison Gopnik, Max Read, and Brian Merchant. (And thanks to Time for naming Hard Fork one of the 100 best podcasts of all time!)

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

Fly.io lets you spin up hardware-virtualized containers (Fly Machines) that boot in milliseconds, run any Docker image, and scale to zero automatically when idle. Whether your workloads are driven by humans or autonomous AI agents, Fly Machines provide infrastructure that's built to handle it:

  • Instant Boot Times: Machines start in milliseconds, ideal for dynamic and unpredictable demands.
  • Zero-Cost When Idle: Automatically scale down when not in use, so you're only billed for active usage.
  • Persistent Storage: Dedicated storage for every user or agent with Fly Volumes, Fly Managed Postgres, and S3-compatible storage from Tigris Data.
  • Dynamic Routing: Seamlessly route each user (or robot) to their own sandbox with Fly Proxy and fly-replay.

If your infrastructure can't handle today's dynamic and automated workloads, it's time for an upgrade.

Build infrastructure ready for both humans and robots. Try Fly.io.
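For the curious, here is a rough, unofficial sketch of what the sponsor is describing: booting a dedicated Fly Machine for a single user or agent session through the Machines REST API, then letting it tear itself down when its work ends. The app name, image, and region are hypothetical placeholders, not anything Fly ships, and this is illustrative rather than official sample code:

  # Unofficial sketch: create one Fly Machine per user or agent session via the
  # Machines REST API. App name, image, and region are hypothetical placeholders.
  import os
  import requests

  FLY_API = "https://api.machines.dev/v1"
  APP_NAME = "agent-sandboxes"  # an app you created beforehand
  HEADERS = {"Authorization": f"Bearer {os.environ['FLY_API_TOKEN']}"}

  def create_sandbox(session_id: str) -> str:
      """Boot a dedicated Machine for one session; returns its Machine ID."""
      resp = requests.post(
          f"{FLY_API}/apps/{APP_NAME}/machines",
          headers=HEADERS,
          json={
              "name": f"sandbox-{session_id}",
              "region": "ord",  # placeholder region
              "config": {
                  "image": "registry.fly.io/agent-sandboxes:latest",  # any Docker image
                  "guest": {"cpu_kind": "shared", "cpus": 1, "memory_mb": 512},
                  "auto_destroy": True,  # tear the Machine down when its process exits
              },
          },
          timeout=30,
      )
      resp.raise_for_status()
      return resp.json()["id"]

In practice, each returned Machine ID is what you would route subsequent requests to — for example, via the fly-replay routing mentioned in the bullets above.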

Trump's AI Action Plan

Governing

Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and defenses of trust and safety: casey@platformer.news. Read our ethics policy here.