OpenAI makes an election plan

Concerns over deepfakes are mounting — but there are reasons for optimism

(Smail Aslanda / Getty Images)

A fresh start

As of today, Platformer is off Substack and publishing on Ghost. Whether you had a free or paid subscription, it should now be moved over to our new provider. To log in, just click or tap “sign in” on platformer.news and you’ll receive an email logging you in. We put together an FAQ for the migration that you may wish to check out; if you’re having any trouble with your subscription or your question isn’t answered there, please email zoe@platformer.news and we’ll get it sorted out.

Before we get to today’s column, I want to express my deep gratitude to the Platformer community for helping us navigate this decision. In emails, Discord messages, blog post comments, and social media posts, you expressed overwhelming support for our move. After I posted on social media Monday letting you know our new site was live, dozens of you upgraded to paid subscriptions. Thank you to everyone who weighed in on this issue — as painful as it has sometimes been, it has also been a moment for us to reflect on our values and seek to live up to them.

We know many of you have been waiting for us to move before upgrading or renewing your subscription. As a token of thanks for your patience, and in celebration of our new home, we’re excited to announce the second-ever sale in Platformer history. For the next week, new subscribers can get 20 percent off the first year of an annual subscription by using this link.

And with that — and hopefully for a very long while — let’s get back to other people’s platforms.


On Monday, amid rising concerns about deepfakes and other ways generative artificial intelligence could threaten democracy, OpenAI outlined its approach to making product policy for global elections. Today let’s talk about what’s on the mind of platforms and regulators — and look at how things are going for each of them so far.

Broadly speaking, everyone seems to be bracing themselves for a rough 2024. In the Financial Times, Hannah Murphy surveys a host of experts and finds that many of them are fearing the worst. She recounts the story of last year’s Slovakian election, in which a synthetic audio recording purporting to capture the liberal opposition leader planning to buy votes and rig the election was shared widely during the run-up to the vote — and in the midst of a moratorium on media coverage of the election. (I’ve touched on the Slovakia story as well.)

The people responsible for the deepfake haven’t been identified, and it’s difficult to determine how influential the deepfake may have been in deciding the vote. But despite leading in the exit polls, the victim, Michal Šimečka, ultimately lost to his right-wing opponent.

What happened in Slovakia will likely soon occur in many more countries around the world, experts say. 

“The technologies reached this perfect trifecta of realism, efficiency and accessibility,” Henry Ajder, who advises Adobe and Meta on AI issues, told Murphy. “Concerns about the electoral impact were overblown until this year. And then things happened at a speed which I don’t think anyone was anticipating.”

Moreover, the technology is getting better at a time when our ability to identify and remove covert influence campaigns is arguably waning. Some platforms have laid off significant numbers of employees who once worked on election integrity as part of cost-cutting measures; others, like X, have denounced content moderation and promised to do as little of it as possible. Meanwhile, with a “jawboning” case pending before the US Supreme Court, the federal government has stopped sharing information with platforms for fear that putting any pressure on companies to remove content will be seen as a violation of the First Amendment.  

Amid those challenges, platforms have responded with a variety of policies intended to prevent worst-case scenarios like the Slovakian case. While they vary in their details, all of them would prevent someone from attempting to pass off a fake video or audio of a candidate as real.

Of course, establishing a policy is only half the battle. You also have to enforce it, and there have been some worrisome early lapses on that front. 

I continue to be shaken by the news last month that a network of accounts attracted 730,000 subscribers and nearly 120 million views across 30 channels dedicated to promoting pro-China and anti-U.S. narratives. The channels used AI voices to read essays promoting those narratives, presumably so as to sound more American and not betray their true origin. (YouTube said it had shut down “several,” but not all, of the accounts.)

Last week, the Guardian reported that deepfaked videos of United Kingdom Prime Minister Rishi Sunak had reached as many as 400,000 views on Facebook before Meta removed them. “They include one with faked footage of a BBC newsreader, Sarah Campbell, appearing to read out breaking news that falsely claims a scandal has erupted around Sunak secretly earning ‘colossal sums from a project that was initially intended for ordinary citizens,’” the Guardian’s Ben Quinn reported.

The Sunak story tested two of my own assumptions about what to expect from deepfakes this year. One is that heads of state will prove more resilient to deepfake attacks than lesser-known candidates, since the national media will spend more time debunking synthetic media about them, preventing those narratives from taking hold. Two is that platforms will intervene swiftly to remove deepfakes of national figures for fear of reprisal from national governments.

Meta says it did act swiftly in this case, removing most of the Sunak deepfakes before the Guardian’s story published. Still, ideally the videos would not have racked up hundreds of thousands of views before that happened.

That brings us to OpenAI, which for the first time since ChatGPT captivated the world’s attention will find itself the subject of scrutiny over its own policy responses to election issues.

It’s clear that the company has taken notes on how other platforms handle these questions, and has borrowed best practices from many of them: preventing the creation of chatbots that impersonate real people or institutions, for example, and banning “applications that deter people from participation in democratic processes — for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting.” 

OpenAI also differs from some of its peers in banning some novel uses of its technology preemptively, rather than waiting for disaster to strike and disabling them only then. For example, the company decided not to let developers build AI tools that could create highly targeted persuasive messaging — likely giving up millions in revenue from political operations that would gladly have used its APIs to try it.

“We’re still working to understand how effective our tools might be for personalized persuasion,” the company said in its blog post. “Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”

OpenAI also says that this year it plans to integrate more real-time reporting into ChatGPT, which will come with attribution and links to news sources. The company has spent the past several months signing licensing deals with high-quality publishers including the Associated Press and Axel Springer, and is in ongoing talks with several more.

Assuming that ChatGPT gives preference to this licensed content, OpenAI will likely perform as well as or better than most search engines or social products when answering election-related queries. It will be much more difficult to game ChatGPT, since the chatbot will be intentionally drawing on a smaller number of approved sources.

I’m glad OpenAI is being thoughtful about its approach, which could serve as a model for other AI developers. At the same time, we should prepare for the fact that not everyone will take the same approach. Smaller startups and open-source projects with fewer guardrails will likely release more permissive tools this year, and how they are used will bear close scrutiny.

But there’s reason for hope there, too. In the United States, as is typical when the country faces a potential new technology threat, Congress has done nothing. But as David W. Chen reported in the New York Times last week, state lawmakers have seen quick success in passing restrictions on the use of generative AI in campaigning. And Democrats and Republicans are coming together to pass these laws. 

Chen writes:

At the beginning of 2023, only California and Texas had enacted laws related to the regulation of artificial intelligence in campaign advertising, according to Public Citizen, an advocacy group tracking the bills. Since then, Washington, Minnesota and Michigan have passed laws, with strong bipartisan support, requiring that any ads made with the use of artificial intelligence disclose that fact.

By the first week of January, 11 more states had introduced similar legislation — including seven since December — and at least two others were expected soon as well. The penalties vary; some states impose fines on offenders, while some make the first offense a misdemeanor and further offenses a felony.

Laws like these ought to reduce the degree to which synthetic media can sway state and local races, where media coverage is more limited and races may be more vulnerable to disruption. They may also — dare to dream — help to build a bipartisan norm against using AI fakery in campaigning.

I’m sure that the desire to protect democracy plays some role in lawmakers’ urgency here. But their quick action also shows that the real way to get lawmakers to pass laws is to play to their self-interest — no one, after all, wants to be the victim of a viral deepfake campaign. 

Chen recounts the story of a Republican legislator in Kentucky who worries that he might be targeted for deepfake attacks based on the fact that he has two pet sheep. “Imagine if it’s three days before the election, and someone says I’ve been caught in an illicit relationship with a sheep and it’s sent out to a million voters,” Rep. John Hodgson told the Times. “You can’t recover from that.”

There remains much to be concerned about. But scanning the landscape, I take heart in how many people are already seeing deepfakes for what they are — a modern-day twist on that age-old threat, the wolf in sheep's clothing.


Platformer Live!

Bay Area readers: Come join us for Zoë's book launch event on February 13 at Manny's in San Francisco! We'd love to meet you in person.


Governing


Industry


Those good posts

For more good posts every day, follow Casey’s Instagram stories.



Talk to us

Send us tips, comments, questions, and deepfake election policies: casey@platformer.news and zoe@platformer.news.