Roblox is a problem — but it’s a symptom of something worse

What is the role of tech journalism in a world where CEOs no longer feel shame?


I.

On Friday, the Hard Fork team published our interview with Roblox CEO David Baszucki. In the days since, it has become the most-discussed interview we've done in three years on the show. Listeners who wrote in to us said they were shocked to hear the leader of a platform with 151.5 million monthly users, most of them minors, express frustration and annoyance at being asked about the company's history of failures related to child safety. Journalists described the interview as "bizarre," "unhinged," and a "car crash."

And a case can be made that it was all of those things — even if Baszucki, in the studio afterwards and later on X, insisted to us that he had had a good time. In the moment, though, Baszucki's dismissive attitude toward discussing child safety struck me as something worse: familiar.

Baszucki, after all, is not the first CEO to have insisted to me that a platform's problems are smaller than I am making them out to be. Nor is he the first to blame the platform's enormous scale, or to try to change the subject. (He is the first tech CEO to suggest to me that maybe there should be prediction markets in video games for children, but that's another story.)

What people found noteworthy about our interview, I think, was the fresh evidence that our most successful tech CEOs really do think and talk this way. Given a chance to display empathy for the victims of crimes his platform enabled, or to convey regret about historical safety lapses, or even just to gesture at some sense of responsibility for the hundreds of millions of children who in various ways are depending on him, the CEO throws up his hands and asks: how long are you guys going to be going on about all this stuff?

Roblox is different from other social products in that it explicitly courts users as young as 5. (You are supposed to be at least 13 to use Instagram, TikTok, and other major platforms.) That has always put significant pressure on the company to develop serious safety features. The company says it spends hundreds of millions of dollars a year on safety, and that 10 percent of its employees work on trust and safety issues. And trust and safety workers I know tell me that they respect Roblox's safety teams.

At the same time, this is a platform launched in 2006 where, for most of its history, adults could freely approach and message any minor unless their parents had dug into the app settings. Roblox did not verify users' ages, letting any child identify as 13 or older to bypass content restrictions. Filters intended to prevent inappropriate chat or the exchange of personal information were easily bypassed by slightly changing the spelling of words. Parental controls could be circumvented simply by a child creating a new account and declaring that they were at least 13.

Last year the company introduced new restrictions on chat. And this year, the company said it would deploy its own age estimation technology to determine users' ages and restrict the content available to them accordingly. This rollout was the main reason we had sought to interview Baszucki in the first place — something we had communicated to his team.

Which only made it stranger when Baszucki expressed surprise at our line of inquiry and threw his PR team under the bus. (“If our PR people said, ‘Let’s talk about age-gating for an hour,’ I’m up for it, but I love your pod. I thought I came here to talk about everything,” he said.)

Since 2018, at least two dozen people in the United States have been arrested and accused of abducting or abusing victims they met on Roblox, according to a 2024 investigation by Bloomberg. Attorneys general in Texas, Kentucky, and Louisiana have filed lawsuits against Roblox alleging that the platform facilitates child exploitation and grooming. More than 35 families have filed lawsuits against the company over child predation.

As recently as this month, a reporter for the Guardian created an account presenting herself as a child and found that in Roblox she could wander user-created strip clubs, casinos, and horror games. In one "hangout" game, in which she identified as a 13-year-old, another avatar sexually assaulted her by thrusting his hips into her avatar's face as she begged him to leave her alone.

It's true that any platform that lets strangers communicate will lead to real-world harm. I believe that millions of children use Roblox daily without incident. And we would not want to shut down the entire internet to prevent a single bad thing from ever happening.

But there is much a leader can do with the knowledge that his platform will inevitably lead to harm, should he wish.

Knowing how attractive Roblox would be to predators, the company could have blocked unrestricted contact between adults and minors long ago. It could have adopted age verification before a wave of state legislation signaled that it would soon become mandatory anyway. It could have made it harder for children under 13 to create new accounts, and required them to get parental consent in a way it could verify.

But doing so would require Roblox to focus on outcomes for children, at the likely expense of growth. And so here we are.

II.

Galling? Yes. But like I said: it's also familiar.

Over and over again, we have seen leaders in Baszucki's position choose growth over guardrails. Safety features come out years after the need for them is identified, if at all. Internal critics are sidelined, laid off, or managed out. And when journalists ask, politely but insistently, why so many of their users are suffering, executives laugh and tell us that we're the crazy ones.

Look at OpenAI, which is reckoning with the fact that making its models less sycophantic has been worse for user engagement — and which is now building new features to turn the engagement dial back up.

Look at TikTok, which has answered concerns that short-form video is worsening academic performance for children with new "digital well-being features" that include an affirmation journal, a "background sound generator aimed at improving the mental health of its users," and "new badges to reward people who use the platform within limits, especially teens." Answering concerns that teens are using the app too much with more reasons to use the app.

Or look at Meta, where new court filings from over the weekend allege ... a truly staggering number of things. To name a few: the company "stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns," according to Jeff Horwitz in Reuters; "recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway"; and gave users 17 attempts to traffic people for sex before banning their accounts. (Meta denies the allegations, which are drawn from internal documents that have not been made public; Meta has also objected to unsealing the documents.)

Lawsuits will always contain the most salacious allegations lawyers can find, of course. But what struck me about these latest filings is not the lawyers' predictably self-serving framing but rather the quotes from Meta's own employees.

When the company declined to publish internal research from 2019 showing that stepping away from Facebook and Instagram improved users' mental health, one employee said: "If the results are bad and we don’t publish and they leak ... is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

When Meta researchers found that by 2018, approximately 40 percent of children ages 9 to 12 were daily Instagram users — despite the fact that you are supposed to be 13 to join — some employees bristled at what they perceived as tacit encouragement from executives to accelerate growth efforts among children.

"Oh good, we’re going after <13 year olds now?” one wrote, as cited in Time's account of the brief. “Zuck has been talking about that for a while...targeting 11 year olds feels like tobacco companies a couple decades ago (and today). Like we’re seriously saying ‘we have to hook them young’ here.”

When Meta studied the potential of its products to be addictive in 2018, it found that 55 percent of 20,000 surveyed users showed at least some signs of "problematic use." When it published that research the following year, though, it redefined "problematic use" to include only the most severe cases — 3.1 percent of users.

“Because our product exploits weaknesses in the human psychology to promote product engagement and time spent,” a user experience researcher wrote, the company should “alert people to the effect that the product has on their brain.”

You will not be surprised to learn that the company did not alert people to the issue. 

III.

As usual, the rank-and-file employees are doing their job. Over and over again, though, their boss' boss tells them to stop.

The thing is, platforms' strategy of delay, deny and deflect mostly works.

Americans have short attention spans — and lots to worry about. The tech backlash that kicked off in 2017 inspired platforms to make meaningful and effective investments in content moderation, cybersecurity, platform integrity, and other teams that worked to protect their user bases. Imperfect as these efforts were, they bolstered my sense that tech platforms were susceptible to pressure from the public, from lawmakers and from journalists. They acted slowly, and incompletely, but at least they acted.

Fast forward to today and the bargain no longer holds. Platforms do whatever the president of the United States tells them to do, and very little else. Shame, that once-great regulator of social norms and executive behavior, has all but disappeared from public life. In its place is denial, defiance, and the noxious vice signaling of the investor class.

I'm still reckoning with what it means to do journalism in a world where the truth can barely hold anyone's attention — much less hold a platform accountable, in any real sense of that word. I'm rethinking how to cover tech policy at a time when it is being made by whim. I'm noticing the degree to which platforms wish to be judged only by their stated intentions, and almost never on the outcomes of anyone who uses them.

In the meantime the platforms hurtle onward, pitching ever-more fantastical visions of the future while seeming barely interested in stewarding the present.

For the moment, I'm grateful that a car-crash interview drew attention to one CEO's exasperation with being asked about that. But the real problem isn't that David Baszucki talks this way. It's that so many of his peers do, too.

Sponsored

Unknown number calling? It’s not random…

The BBC caught scam call center workers on hidden cameras as they laughed at the people they were tricking.

One worker bragged about making $250k from victims. The disturbing truth?
Scammers don’t pick phone numbers at random. They buy your data from brokers.

Once your data is out there, it’s not just calls. It’s phishing, impersonation, and identity theft.

That’s why we recommend Incogni: They delete your info from the web, monitor and follow up automatically, and continue to erase data as new risks appear. 

Black Friday deal: Try Incogni here and get 55% off your subscription with code PLATFORMER

Following

Trump backs down on AI preemption

What happened: Facing criticism from both parties, the Trump administration backed down from issuing an executive order that would have effectively placed a moratorium on state AI regulations, Reuters reported.

The order would have fought state regulations by withholding federal funding and establishing an “AI Litigation Task Force” to “challenge State AI laws.”

Why we’re following: Last week we covered the draft executive order and how Trump’s attempts to squash state AI regulation have drawn bipartisan backlash — and made Republicans increasingly sympathetic to the views of AI safety advocates.

It's always hard to guess when Trump's instinct to do as he pleases will be thwarted by political opposition. In this case, though, the revived moratorium had little support outside the David Sacks wing of the party. And so — for now, anyway — it fell apart.

What people are saying: State lawmakers are fighting the moratorium proposal Trump made to Congress. Today, a letter signed by 280 state lawmakers urged Congress to “reject any provision that overrides state and local AI legislation.”

A moratorium would threaten existing laws that “strengthen consumer transparency, guide responsible government procurement, protect patients, and support artists and creators,” the letter said.

On the other side of the debate, the tech-funded industry PAC Leading the Future announced a $10 million campaign to push Congress to pass national AI regulations that would supersede state law.

—Ella Markianos


X’s "About This Account" meltdown

What happened: On Friday, X debuted its About This Account feature globally, in a rollout that descended into chaos after the feature inadvertently revealed the foreign actors behind popular right-wing accounts that actively share news about US politics.

X users can now see the date an account joined the platform, how many times it has changed its username, and most importantly, the country or region it’s based in. The move, according to X head of product Nikita Bier, “is an important first step to securing the integrity of the global town square.”

But the feature has had an unintended consequence: it revealed that big pro-Trump accounts aren't always based in the US. @MAGANationX, for example, a right-wing account with nearly 400,000 followers that regularly shares news about US politics, is based in Eastern Europe, according to X.

Other popular right-wing accounts that use Trump family names, like @IvankaNews_ (1 million followers before it was suspended), @BarronTNews (nearly 600,000 followers), and @TrumpKaiNews (more than 11,000 followers), appear to be based in Nigeria, Eastern Europe, and Macedonia, respectively.

The data could be skewed by travel, VPNs, or old IP addresses, and some have complained their location is inaccurate. Bier said the rollout has “a few rough edges” that will be resolved by Tuesday. 

Why we’re following: One of Elon Musk’s promises during the takeover of Twitter was to purge the platform of inauthentic accounts. But several studies have shown that suspected inauthentic activity has remained at about the same levels. X has long struggled with troll farms spreading misinformation, boosted by its tendency to monetarily reward engagement. 

There's also an irony in the fact that revealing the origins of ragebait-posting political accounts like these was once the subject of groundbreaking research by the Stanford Internet Observatory and other academic researchers. But the effort outraged Republicans, who sued the researchers over their contacts with the government about information operations like these and largely succeeded in stopping the work.

What people are saying: Accusations of foreign actors spreading fake news flew on both sides of the aisle. When the feature appeared to be pulled for a short period of time, Republican Gov. Ron DeSantis of Florida said “X needs to reinstate country-of-origin — it helps expose the grift.”

In a post that garnered 3.2 million views, @greg16676935420 attached a screenshot of @AmericanGuyX’s profile, which shows the account’s based in India: “BREAKING: American guy is not actually an American guy.”

“When an American billionaire offers money to people from relatively poor countries for riling up and radicalising Americans, it's not surprising that they'll take up the offer,” @ChrisO_wiki wrote in a post that garnered nearly 700,000 views. 

In perhaps the most devastating consequence of the feature, @veespo_444s said they “spent 2 years acting mysterious over what country I live in just for Elon to fuck it all up with a single update” in a post that has 4.3 million views and 90,000 likes. 

—Lindsey Choo

Side Quests

How President Trump amplifies right-wing trolls and AI memes. The crypto crash has taken about $1 billion out of the Trump family fortune.

Gamers are using Fortnite and GTA to prepare for ICE raids. How Democrats are building their online strategy to catch up with Republicans.

In the last month, Elon Musk has posted more about politics than about his companies on X.

Hundreds of English-language websites link to articles from a pro-Kremlin disinformation network and are being used to "groom" AI chatbots into spreading Russian propaganda, a study found. 

Sam Altman and Jony Ive said they’re now prototyping their hardware device, but it remains two years away. An in-depth look at OpenAI's mental health crisis after GPT-4o details how the company changed ChatGPT after reports of harmful interactions. OpenAI safety research leader Andrea Vallone, who led ChatGPT’s responses to mental health crises, is reportedly leaving. A review of ChatGPT’s new personal shopping agent.

Anthropic unveiled Claude Opus 4.5, which it said is the best model for software engineering. Other highlights from the launch: it outscored human engineering candidates on a take-home exam, is cheaper than Opus 4.1, can keep a chat going indefinitely via ongoing summarization of past chats, and is harder to trick with prompt injection.

In other research, AI models can unintentionally develop misaligned behaviors after learning to cheat, Anthropic said. (This won an approving tweet from Ilya Sutskever, who hadn't posted about AI on X in more than a year.)

Why Meta’s $27 billion data center and its debt won’t be on its balance sheet. Meta is venturing into electricity trading to speed up its power plant construction. Facebook Groups now has a nickname feature for anonymous posting.

A judge is set to decide on remedies for Google’s adtech monopoly next year. Italy closed its probe into Google over unfair practices that used personal data. Google stock closed at a record high last week after the successful launch of Gemini 3. AI Mode now has ads. 

Something for the AI skeptics: Google must double its serving capacity every six months to meet current demand for AI services, Google Cloud VP Amin Vahdat said.

AI demand has strained the memory chip supply chain, chipmakers said.

Amazon has more than 900 data centers — more than previously known — in more than 50 countries. Its Autonomous Threat Analysis system uses specialized AI agents for debugging. AWS said it would invest $50 billion in AI capabilities for federal agencies.

Twitch was added to Australia's list of platforms banned for under-16s. Pinterest was spared. 

Grindr said it ended talks on a $3.5 billion take-private deal, citing uncertainty over financing.

Interviews with AI quality raters who are telling their friends and family not to use the tech. How AI is threatening the fundamental method of online survey research by evading bot detection techniques. Insurers are looking to limit their liability on claims related to AI. Another look at how America’s economy is now deeply tied to AI stocks and their performance.

Scientists built an AI model that can flag human genetic mutations likely to cause disease.

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and your questions for the tech CEOs: casey@platformer.news. Read our ethics policy here.