The AI industry really should slow down a little
This year has given us a bounty of innovations. We could use some time to absorb them
What a difference four months can make.
If you had asked me in November how I thought AI systems were progressing, I might have shrugged. Sure, by then OpenAI had released DALL-E, and I found myself enthralled with the creative possibilities it presented. On the whole, though, after years of watching the big platforms hype up artificial intelligence, few products on the market seemed to live up to the grandiose visions we had been sold.
Then OpenAI released ChatGPT, the chatbot that captivated the world with its generative possibilities. Microsoft’s GPT-powered Bing chatbot, Anthropic’s Claude, and Google’s Bard followed in quick succession. AI-powered tools are quickly working their way into other Microsoft products, and more are coming to Google’s.
At the same time, as we inch closer to a world of ubiquitous synthetic media, some danger signs are appearing. Over the weekend, an image of Pope Francis in an exquisite white puffer coat went viral — and I was among those who were fooled into believing it was real. The founder of open-source intelligence site Bellingcat was banned from Midjourney after using it to create and distribute some eerily plausible images of Donald Trump getting arrested. (The company has since disabled free trials in an effort to reduce the spread of fakes.)
Synthetic text is rapidly making its way into the workflows of students, copywriters, and anyone else engaged in knowledge work; this week BuzzFeed became the latest publisher to begin experimenting with AI-written posts.
At the same time, tech platforms are cutting members of their AI ethics teams. A large language model created by Meta leaked and was posted to 4chan, and soon someone figured out how to get it running on a laptop.
Elsewhere, OpenAI released plug-ins for ChatGPT, allowing the chatbot to access APIs and interface more directly with the internet, sparking fears that it would create unpredictable new avenues for harm. (I asked OpenAI about that one directly; the company didn’t respond to me.)
It is against the backdrop of this maelstrom that a group of prominent technologists is now asking makers of these tools to slow down. Here’s Cade Metz and Gregory Schmidt at the New York Times:
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”
A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.
Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.
If nothing else, the letter strikes me as a milestone in the march of existential AI dread toward mainstream awareness. Critics and academics have been warning about the dangers posed by these technologies for years. But as recently as last fall, few people playing around with DALL-E or Midjourney worried about “an out-of-control race to develop and deploy ever more powerful digital minds.” And yet here we are.
There are some worthwhile critiques of the technologists’ letter. Emily M. Bender, a professor of linguistics at the University of Washington and AI critic, called it a “hot mess,” arguing in part that doomer-ism like this winds up benefiting AI companies by making them seem much more powerful than they are. (See also Max Read on that subject.)
In an embarrassment for a group nominally worried about AI-powered deception, a number of the people initially presented as signatories to the letter turned out not to have signed it. And Forbes noted that the institute that organized the letter campaign is primarily funded by Musk, who has AI ambitions of his own.
There are also arguments that speed should not be our primary concern here. Last month Ezra Klein argued that our real focus should be on these systems’ business models. The fear is that ad-supported AI systems will prove more powerful at manipulating our behavior than we currently imagine — and that would be dangerous no matter how fast or slow we choose to go. “Society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions,” Klein wrote.
These are good and necessary criticisms. And yet whatever flaws we might identify in the open letter — I apply a pretty steep discount to anything Musk in particular has to say these days — in the end I’m persuaded by its collective argument. The pace of change in AI does feel as if it could soon overtake our collective ability to process it. And the change the signatories are asking for — a brief pause in the development of language models more powerful than those that have already been released — feels like a minor request in the grand scheme of things.
Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It’s typically less adept at thinking through how new technologies might cause society-level change. And yet the potential for AI to dramatically affect the job market, the information environment, cybersecurity, and geopolitics — to name just four concerns — should give us all reason to think bigger.
Aviv Ovadya, who studies the information environment and whose work I have covered here before, served on a red team for OpenAI prior to the launch of GPT-4. Red-teaming is essentially a role-playing exercise in which participants act as adversaries to a system in order to identify its weak points. The GPT-4 red team discovered that, if left unchecked, the language model would do all sorts of things we wish it wouldn’t, like hire an unwitting TaskRabbit worker to solve a CAPTCHA for it. OpenAI was then able to fix that and other issues before releasing the model.
In a new piece in Wired, though, Ovadya argues that red-teaming alone isn’t sufficient. It’s not enough to know what material the model spits out, he writes. We also need to know what effect the model’s release might have on society at large. How will it affect schools, or journalism, or military operations? Ovadya proposes that experts in these fields be brought in prior to a model’s release to help build resilience in public goods and institutions, and to see whether the tool itself might be modified to defend against misuse.
Ovadya calls this process “violet teaming”:
You can think of this as a sort of judo. General-purpose AI systems are a vast new form of power being unleashed on the world, and that power can harm our public goods. Just as judo redirects the power of an attacker in order to neutralize them, violet teaming aims to redirect the power unleashed by AI systems in order to defend those public goods.
In practice, executing violet teaming might involve a sort of “resilience incubator”: pairing grounded experts in institutions and public goods with people and organizations who can quickly develop new products using the (prerelease) AI models to help mitigate those risks.
If adopted by companies like OpenAI and Google, either voluntarily or at the insistence of a new federal agency, violet teaming could better prepare us for how more powerful models will affect the world around us.
At best, though, violet teams would only be part of the regulation we need here. There are so many basic issues we have to work through. Should models as big as GPT-4 be allowed to run on laptops? Should we limit the degree to which these models can access the wider internet, the way OpenAI’s plug-ins now do? Will an existing government agency regulate these technologies, or do we need to create a new one? If so, how quickly can we do that?
I don’t think you have to have fallen for AI hype to believe that we will need an answer to these questions — if not now, then soon. It will take time for our sclerotic government to come up with answers. And if the technology continues to advance faster than the government’s ability to understand it, we will likely regret letting it accelerate.
Either way, the next several months will let us observe the real-world effects of GPT-4 and its rivals, and help us understand how and where we should act. But the knowledge that no larger models would be released during that time would, I think, give comfort to those who fear AI could prove as harmful as the letter’s signatories warn.
If I took one lesson away from covering the backlash to social media, it’s that the speed of the internet often works against us. Lies travel faster than anyone can moderate them; hate speech inspires violence more quickly than tempers can be calmed. Putting brakes on social media posts as they go viral, or annotating them with extra context, has made those networks more resilient to bad actors who would otherwise use them for harm.
I don’t know if AI will ultimately wreak the havoc that some alarmists are now predicting. But I believe those harms are more likely to come to pass if the industry keeps moving at full speed.
Slowing down the release of larger language models isn’t a complete answer to the problems ahead. But it could give us a chance to develop one.
Coming up on the podcast tomorrow morning: Kevin and I sit down in person with Google CEO Sundar Pichai to talk about launching Bard, the AI arms race, and how he thinks about balancing AI risk with competitive pressures. And, of course: did he order the code red?
If you’re not listening to Hard Fork yet — this is the moment.
Apple | Spotify | Stitcher | Amazon | Google
The Center for AI and Digital Policy asked the FTC to investigate OpenAI for potentially violating consumer protection rules, arguing the rollout of GPT-4 was “biased, deceptive, and a risk to public safety.” (Adi Robertson / The Verge)
Generative AI could automate a quarter of the work done in the US and Europe, in a major disruption to the labor market, according to new research from Goldman Sachs. (Delphine Strauss / Financial Times)
Microsoft, Meta, Google, Amazon and Twitter have all recently cut members of their “responsible AI teams,” even as many of the companies race to release AI products. (Cristina Criddle and Madhumita Murgia / Financial Times)
TikTok enlisted three heavyweights from American politics and business, including high profile Obama and Disney staffers, as it tries to convince US authorities against banning the app. (Kirsten Grind and Erich Schwartzel / Wall Street Journal)
Google violated a court order to preserve employee chat logs amid ongoing antitrust litigation over its app store policies, according to a federal judge. (Malathi Nayak / Bloomberg)
Elon Musk tried to meet with FTC chair Lina Khan about Twitter amid the agency’s ongoing investigation into the company’s data practices, but was rebuffed. (David McCabe and Kate Conger / New York Times)
Clearview AI says it has run nearly a million searches for US police. (James Clayton and Ben Derico / BBC)
BuzzFeed tested FreedomGPT, a chatbot that will answer any question free of guardrails, giving a glimpse into what large language models can do when human concerns are removed. (Pranav Dixit / BuzzFeed)
Meta will let Facebook and Instagram users in Europe opt out of some highly personalized ads as part of its plans to limit the impact of a European Union privacy order. (Sam Schechner and Jeff Horwitz / Wall Street Journal)
Meta executives are discussing a company-wide ban on political advertising in Europe. (Javier Espinoza and Cristina Criddle / Financial Times)
Reddit permanently suspended 244 percent more users in 2022 than in 2021 for sharing revenge porn. (Ashley Belanger / Ars Technica)
The UK government published recommendations for the AI industry, urging regulators to come up with “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” (Ryan Browne / CNBC)
The UK government published a draft Media Bill to bring US streamers like Netflix under its regulatory framework, imposing fines if they break rules around harmful content that have applied to public broadcasters for decades. (Max Goldbart / Deadline)
The UK is moving forward with an in-depth inquiry into Broadcom's acquisition of VMware after the US chipmaker offered no immediate undertakings in response to the government’s antitrust concerns. (Reuters)
Russia is using facial recognition cameras to identify and arrest protestors. (Lena Masri / Reuters)
TikTok users are stepping up to defend the app and its CEO, making videos against the looming ban. About time! (Sapna Maheshwari and Kalley Huang / New York Times)
TikTok, Amazon and YouTube have invested heavily in livestream e-commerce of the kind pioneered by Chinese retail giants, but are struggling to get traction in the United States. (Tracy Wen Liu / Wired)
Google firmly denied allegations that it used ChatGPT data to train Bard. (Sean Hollister / The Verge)
Google is partnering with startup Replit to create an AI-powered coding assistant to take on GitHub Copilot, the rival tool from Microsoft and OpenAI. (Dina Bass / Bloomberg)
Google is adding a new carousel in search results to help users see different perspectives on certain search topics. (Jay Peters / The Verge)
Google is introducing new extreme heat alerts in Search that are designed to surface information to help people stay safe during heat waves. (Aisha Malik / TechCrunch)
Engineers at Google’s Brain AI group are working with employees at DeepMind to develop software to compete with OpenAI, working to overcome a years-long rivalry. (Jon Victor and Amir Efrati / The Information)
Publishers worry fewer people will click through to news sites now that AI chatbots are disrupting Search. (Katie Robertson / New York Times)
People fell for AI chatbots on Replika — then the platform scaled back the bot’s sexual capacity, breaking hearts. (Pranshu Verma / Washington Post)
Meta’s decision to stagger its latest round of layoffs over a couple months is contributing to anxiety within the workforce and distracting people from their work. (Sylvia Varnham O'Regan / The Information)
Microsoft confirmed that more ads are coming to Bing’s AI-powered chatbot. (Jay Peters / The Verge)
Microsoft and Disney both shut down projects related to the metaverse this month. (Meghan Bobrowsky / Wall Street Journal)
Mastodon CEO Eugen Rochko goes deep on his company’s structure with Nilay Patel; the decentralized social network has just five employees. (Nilay Patel / The Verge)
Elon Musk dethroned former President Barack Obama as the most followed person on Twitter. (Emma Roth / The Verge)
Twitter is amplifying hate speech on the For You page. (Faiz Siddiqui and Jeremy B. Merrill / Washington Post)
Twitter has confirmed some of the details and pricing for the new version of its API, including a severely limited free tier for bots. (Karissa Bell / Engadget)
Apple announced its 34th annual Worldwide Developers Conference will take place June 5-9 online. (Juli Clover / MacRumors)
Apple pushed back mass production of its mixed-reality headset, and the device may not appear at WWDC. (Hartley Charlton / MacRumors)
AI chipmaker Cerebras Systems released seven GPT-based large language models for generative AI, trained on its specialized hardware. (Mike Wheatley / SiliconANGLE)
Lemon8, a ByteDance-owned Instagram rival, jumped into the App Store’s Top Charts on Monday. It appears to be benefiting from a ton of paid promotion — the exact same way ByteDance grew TikTok. (Sarah Perez / TechCrunch)
Those good tweets
For more good tweets every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and slow AI: email@example.com and firstname.lastname@example.org.
good essay. A few extra points though. One, is six months long enough? Two, how are you really gonna stop these guys from experimenting during that period? And three, what about everybody else in the world? Like I don’t know, China?
Here is the possibly immediate risk that seems most concerning -- can you determine how real it is?
Are AI chatbots already polluting the only well we have?
Has the horse already left the barn, and what controls are currently in place? Carl Bergstrom raised this question (https://fediscience.org/@ct_bergstrom/110071929312312906), asking what happens when AI chatbots pollute our information environment and then start feeding on that pollution. As is so often the case, we didn’t have to wait long to get some hint of the kind of mess we could be looking at: https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation
Are we already inhaling our own hallucinating AI fumes, and what is to stop this from becoming an irreversible "tragedy of the information commons" due to poisons we cannot filter out?