Are Republicans changing their minds about AI safety?
A new FTC inquiry into chatbots and children shows that some within the Trump Administration may be reconsidering their anti-regulation approach

Like so many others, I'm sickened by the ongoing political violence in the United States and worried that it will escalate. I'm writing about another subject today, but I recommend three pieces for understanding Charlie Kirk's murder, social media, and the present moment: Ryan Broderick's chilling "The logical endpoint of 21st-century America"; Noah Smith's "Civil war is for idiots and losers"; and, from last month, Nathan Witkin's "The Case Against Social Media is Stronger Than You Think," which makes an empirical case for an "elite radicalization theory" explaining how influencers and tech platforms work together to amplify hatred and division, reshaping our politics along the way.
Inside the current US regulatory apparatus, there are two wolves. One seeks to break apart tech companies. The other seeks to break apart those who would break apart tech companies.
And so on one hand you have the government's antitrust lawsuit against Meta, which seeks to force the company to spin out Instagram and WhatsApp. On the other you have Mark Zuckerberg sitting at the right hand of President Trump at dinner, and the president threatening to impose tariffs against Europe for levying a tax on digital services like Facebook.
On one hand you have the government declaring Google an illegal monopoly in search and ads. On the other you have Trump pledging to fight a European Union fine against Google for anticompetitive practices in ads, calling it "unfair" and "discriminatory."
As the Financial Times reported this week, this dichotomy reflects a real divide within the Republican Party. The elites who advise Trump on business issues favor little to no regulation, and Trump himself has been won over by tech platforms' campaign of flattery and bribery since he won re-election. But most of the average Americans who make up MAGA's base are deeply skeptical of Big Tech and find Trump's cheerleading for the industry distasteful, reporter Joe Miller writes.
And now this divide has come to artificial intelligence. On one hand, Trump's top advisers advocate for an all-gas, no-brakes approach to AI development, sneering at the very concept of AI safety. And on the other we now have this, from Leah Nylen at Bloomberg:
The Federal Trade Commission ordered Alphabet Inc.’s Google, OpenAI Inc., Meta Platforms Inc. and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids.
The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens. The companies also include Meta’s Instagram, Snap Inc., Elon Musk’s xAI and Character Technologies Inc., the developer of Character.AI.
The move comes amid heightened scrutiny of AI companions in the wake of reports about AI psychosis, Meta's child-romancing chatbots, and a ChatGPT-assisted suicide.
In a statement, FTC chair Andrew Ferguson attempted to straddle the administration's conflicting positions on the technology.
“As A.I. technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” he said.
Left unspoken for now is what will happen if it turns out that the goals of protecting children and developing powerful AI are in tension, and require trade-offs.
The AI Action Plan released by the administration this summer encourages agencies to cut "onerous" rules; punish states that pass anti-AI regulations by withholding federal funding; and create "regulatory sandboxes" in which AI companies can be exempt from certain rules in order to let them build faster.
Just this week, Sen. Ted Cruz (R-TX) introduced a bill that would formalize the concept of these sandboxes, arguing that the move would help the United States compete with China.
What sorts of regulations might labs want to be exempt from? At a hearing this week, administration officials reportedly complained about a recently passed Colorado law designed to (gasp) "prevent AI discrimination in employment, housing, banking and other consequential consumer decisions," Reuters reported.
"These types of very anti-innovation regulations are a huge problem for our industry," said Michael Kratsios, director of the Office of Science and Technology Policy, in a revealing quote at the hearing. (He seems to consider himself part of the AI industry at the same time he is supposed to be regulating it?)
The bad news for Kratsios is that protecting children may require the exact sort of "anti-innovation regulations" that he finds so annoying. Tech companies' attempts at self-regulation on child safety issues have been half-hearted at best, and the bad outcomes are stacking up faster than their efforts to prevent them.
That's why, during a particularly bleak week in America, I'm heartened that the FTC is paying attention to this. Yes, the risk of think-of-the-children grandstanding here is high, and there's no guarantee that meaningful change will come out of what for the moment is merely an inquiry.
At the same time, some quarters of the administration appear this close to grasping what has been true all along: that creating a powerful, free, sycophantic digital companion and putting it in the hands of every child in America is not actually a great way to "beat China." It's reckless and has already ended in tragedy.
I can't guess how the current conflict between pro-business and anti-tech interests in the Republican Party will resolve. But for the moment there is at least hope that mounting AI tragedies will push some right-wing politicians to begin taking AI safety seriously.
For the moment, relative to how intelligent they might someday become, today's chatbots are obtuse. And even so, the harms have already materialized. The question for regulators isn't how to put them into a "sandbox" to help them grow faster. Rather, it's this: if the AIs we have today can already assist in your child's suicide, what other harms might they soon cause?
Elsewhere in anti-innovation regulations are a huge problem for our industry: The California State Assembly passed a bill that would require AI chatbot companies to implement safeguards aimed at protecting minors and vulnerable users, sending the bill to the state Senate for a final vote. (Rebecca Bellan / TechCrunch)


On the podcast this week: Kevin and I take in Apple's iPhone event and wonder if the company is losing the juice. Then, AI doomer in chief Eliezer Yudkowsky stops by to discuss his new book with Nate Soares, If Anyone Builds It, Everyone Dies.
Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored
AI Models Spread False Claims on News Topics 35% of the Time

After a year of auditing the leading AI chatbots, NewsGuard has enough company-specific data to draw conclusions about where progress has been made, and where the chatbots still fall short. In the past year, the rate of false information nearly doubled, with clear differences in performance between AI models.
Key finding: On average, the AI models spread false claims on topics in the news 35% of the time, nearly double their fail rate from a year ago.
For the first time, this anniversary audit names and ranks each chatbot by its score.
Download the report to see where your favorite chatbot lands.

Governing
- Graphic footage of the fatal shooting of right-wing activist Charlie Kirk spread rapidly and amassed millions of views on social media platforms like X, Instagram and Threads within hours. (Sheera Frenkel and Kate Conger / New York Times)
- A look at how Bluesky, Meta, Reddit, YouTube and Discord are choosing to moderate posts about the shooting. (Jay Peters / The Verge)
- The majority of Bluesky’s users did not actually celebrate Kirk's death, despite right-wing influencers’ claims that it was happening. (Alex Kirshner / Slate)
- Far-right influencers and violent extremists are targeting and posting identifying details about people they view as celebrating the killing, leading to death threats against them. (David Gilbert / Wired)
- Misinformation about Kirk’s death began circulating just minutes after the shooting was confirmed. (Mia Sato / The Verge)
- Apple gave a subcontractor updated guidelines following President Trump’s inauguration on how its AI model should talk about DEI and a number of other politically sensitive topics. (Océane Herrero / Politico)
- Digital content creators are eligible for the new “no tax on tips” rule, the Treasury Department said. (Alex Weprin / Hollywood Reporter)
- The Department of Justice sued Uber over claims that the company discriminates against people with physical disabilities. (Natalie Lung / Bloomberg)
- Encyclopedia Britannica and Merriam-Webster sued Perplexity, accusing the AI search engine startup of unlawfully copying their material and redirecting their web traffic to its AI summaries. (Blake Brittain / Reuters)
- Perplexity has reportedly finalized its $200 million funding round at a $20 billion valuation. Will that be enough to settle all the lawsuits against the company? (Miles Kruppa, Natasha Mascarenhas and Katie Roof / The Information)
- More than 15 million YouTube videos have been downloaded by tech companies to train AI products without creators’ consent, this investigation found. (Alex Reisner / The Atlantic)
- Bluesky is now verifying the ages of users in South Dakota and Wyoming as required by law, saying those states have a better approach than Mississippi, where Bluesky will not make itself available. (Sarah Perez / TechCrunch)
- Business Insider, Wired and other news outlets retracted a number of articles by “Margaux Blanchard,” which appears to be just one part of a broader scheme to publish AI-generated articles in publications. (Scott Nover and Aaron Schaffer / Washington Post)
- The number of new US-based spyware investors rose sharply in 2024, with 20 new investors identified, far outpacing other major investing countries like Israel, Italy and the UK, according to a new report. (Vas Panagiotopoulos / Wired)
- Meta and TikTok won a legal challenge in the EU that will require regulators to reformulate the way a supervisory fee is calculated. (Foo Yun Chee / Reuters)
- Apple’s live translation feature on AirPods will not immediately be available for EU users due to concerns about the bloc's AI regulations. (Tim Hardwick / MacRumors)
- Apple notified a number of individuals that their devices were targeted in a new spyware campaign, the French government said. (Zack Whittaker / TechCrunch)
- Albania appointed the world’s first ever virtual minister powered by AI, named Diella, who will handle all public procurement. 🙄 (Alice Taylor / Politico)

Industry
- OpenAI reportedly signed a contract with Oracle to buy $300 billion in computing power over roughly five years in one of the largest cloud contracts ever. (Berber Jin / Wall Street Journal)
- OpenAI plans to spend half of its revenue on cloud computing costs, a higher share than any other company of its scale. (Amir Efrati and Sri Muppidi / The Information)
- Oracle has emerged as one of the biggest winners from the AI boom, as its cloud contracts expanded to $455 billion from $138 billion three months ago. (Rafe Rosner-Uddin and Richard Waters / Financial Times)
- The agentic AI era is off to a slow start as state-of-the-art models, including GPT-5, still make fundamental errors, this analysis finds. (Steve Newman / Second Thoughts)
- ByteDance launched its latest AI image generator, Seedream 4.0, which it said surpasses Google DeepMind’s Nano Banana on several benchmarks. (Vincent Chow / South China Morning Post)
- Meta has reportedly signed a multi-year contract to pay more than $100 million to use technology from AI image startup Black Forest Labs. (Kate Clark / Bloomberg)
- Facebook, Instagram and Threads users will now be notified when they’ve interacted with a post that receives a Community Note. (Sarah Perez / TechCrunch)
- Meta’s new elite unit of AI researchers is reportedly causing tension with other employees. (Meghan Bobrowsky, Keach Hagey and Berber Jin / Wall Street Journal)
- Google announced a new AI Plus subscription tier at $19.99 per month in the US, a step up from the free tier but a cheaper alternative to the AI Pro tier. (Abner Li / 9to5Google)
- YouTube’s multi-language audio dubbing feature is rolling out to all creators. (Lauren Forristal / TechCrunch)
- Amazon is developing AR glasses with the aim of rolling them out in late 2026 or early 2027, sources said. (Wayne Ma and Juro Osawa / The Information)
- Spotify is rolling out lossless audio to all Premium subscribers after eight years of rumors. But the rollout will take two months. (Terrence O’Brien / The Verge)
- Reddit is removing the member count metric on subreddit pages in favor of a count of recent active visitors. (Jess Weatherbed / The Verge)
- Reddit launched a set of free tools for publishers to track article performance and get suggestions on which communities to share their stories to. (Ivan Mehta / TechCrunch)
- Companies including Reddit, Yahoo and Medium announced support for Really Simple Licensing, a new licensing standard that lets publishers outline how bots should pay to scrape sites for AI training. (Emma Roth / The Verge)

Those good posts
For more good posts every day, follow Casey’s Instagram stories.

Talk to us
Send us tips, comments, questions, and platforms' responses to the FTC: casey@platformer.news. Read our ethics policy here.