Are Republicans changing their minds about AI safety?

A new FTC inquiry into chatbots and children shows that some within the Trump Administration may be reconsidering their anti-regulation approach


Like so many others, I'm sickened by the ongoing political violence in the United States and worried that it will escalate. I'm writing about another subject today, but would recommend three pieces to understand Charlie Kirk's murder, social media, and the present moment: Ryan Broderick's chilling "The logical endpoint of 21st-century America," Noah Smith's "Civil war is for idiots and losers," and — from last month — Nathan Witkin's "The Case Against Social Media is Stronger Than You Think," which makes an empirical case for an "elite radicalization theory" that explains how influencers and tech platforms work together to amplify hatred and division and reshape our politics along the way.


Inside the current US regulatory apparatus, there are two wolves. One seeks to break apart tech companies. The other seeks to break apart those who would break apart tech companies.

And so on one hand you have the government's antitrust lawsuit against Meta, which seeks to force the company to spin out Instagram and WhatsApp. On the other you have Mark Zuckerberg sitting at the right hand of President Trump at dinner, and the president threatening to impose tariffs against Europe for levying a tax on digital services like Facebook.

On one hand you have the government declaring Google an illegal monopoly in search and ads. On the other you have Trump pledging to fight a European Union fine against Google for anticompetitive practices in ads, calling it "unfair" and "discriminatory."

As the Financial Times reported this week, this dichotomy reflects a real divide within the Republican Party. The elites who advise Trump on business issues favor little to no regulation, and Trump himself has been won over by tech platforms' campaign of flattery and bribery since he won re-election. But most of the average Americans who make up MAGA's base are deeply skeptical of Big Tech and find Trump's cheerleading for the industry distasteful, reporter Joe Miller writes.

And now this divide has come to artificial intelligence. On one hand, Trump's top advisers advocate for an all-gas, no-brakes approach to AI development, sneering at the very concept of AI safety. And on the other we now have this, from Leah Nylen at Bloomberg:

The Federal Trade Commission ordered Alphabet Inc.’s Google, OpenAI Inc., Meta Platforms Inc. and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids.

The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens. The companies also include Meta’s Instagram, Snap Inc., Elon Musk’s xAI and Character Technologies Inc., the developer of Character.AI.

The move comes amid heightened scrutiny of AI companions in the wake of reports about AI psychosis, Meta's child-romancing chatbots, and a ChatGPT-assisted suicide.

In a statement, FTC chair Andrew Ferguson attempted to straddle the administration's conflicting positions on the technology.

“As A.I. technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” he said.

Left unspoken for now is what will happen if it turns out that the goals of protecting children and developing powerful AI are in tension, and require trade-offs.

The AI Action Plan released by the administration this summer encourages agencies to cut "onerous" rules; punish states that pass anti-AI regulations by withholding federal funding; and create "regulatory sandboxes" in which AI companies can be exempt from certain rules in order to let them build faster.

Just this week, Sen. Ted Cruz (R-TX) introduced a bill that would formalize the concept of these sandboxes, arguing that the move would help the United States compete with China.

What sorts of regulations might labs want to be exempt from? At a hearing this week, administration officials reportedly complained about a recently passed Colorado law designed to (gasp) "prevent AI discrimination in employment, housing, banking and other consequential consumer decisions," Reuters reported.

"These types of very anti-innovation regulations are a huge problem for our industry," said Michael Kratsios, director of the Office of Science and Technology Policy, in a revealing quote at the hearing. (He seems to consider himself part of the AI industry at the same time he is supposed to be regulating it?)

The bad news for Kratsios is that protecting children may require the exact sort of "anti-innovation regulations" that he finds so annoying. Tech companies' attempts at self-regulation on child safety issues have been half-hearted at best, and the bad outcomes are stacking up faster than their efforts to prevent them.

That's why, during a particularly bleak week in America, I'm heartened that the FTC is paying attention to this. Yes, the risk of think-of-the-children grandstanding here is high, and there's no guarantee that meaningful change will come out of what for the moment is merely an inquiry.

At the same time, some quarters of the administration appear this close to grasping what has been true all along: that creating a powerful, free, sycophantic digital companion and putting it in the hands of every child in America is not actually a great way to "beat China." It's reckless and has already ended in tragedy.

I can't guess how the current conflict between pro-business and anti-tech interests in the Republican Party will resolve. But for the moment there is hope that mounting AI tragedies will push at least some right-wing politicians to begin taking AI safety seriously.

Relative to how intelligent they might someday be, today's chatbots are obtuse. Even so, the harms have already materialized. The question for regulators isn't how to put them into a "sandbox" to help them grow faster. Rather, it's this: if the AIs we have today can already assist in your child's suicide, what other harms might they soon cause?


Elsewhere in "anti-innovation regulations are a huge problem for our industry": The California State Assembly passed a bill that would require AI chatbot companies to implement safeguards aimed at protecting minors and vulnerable users, sending it to the state Senate for a final vote. (Rebecca Bellan / TechCrunch)

On the podcast this week: Kevin and I take in Apple's iPhone event and wonder if the company is losing the juice. Then, AI doomer in chief Eliezer Yudkowsky stops by to discuss his new book with Nate Soares, If Anyone Builds It, Everyone Dies.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

AI Models Spread False Claims on News Topics 35% of the Time

After a year of auditing the leading AI chatbots, NewsGuard has enough company-specific data to draw conclusions about where progress has been made, and where the chatbots still fall short. In the past year, the rate of false information nearly doubled, with clear differences in performance between AI models.

Key finding: On average, the AI models spread false claims on topics in the news 35% of the time — nearly double their fail rate from a year ago.

For the first time, this anniversary audit names and ranks each chatbot and its score.

Download the report to see where your favorite chatbot lands.

Governing

Industry

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and platforms' responses to the FTC: casey@platformer.news. Read our ethics policy here.