It's the last weekend of our big sale celebrating Platformer's new home! New subscribers can get 20 percent off the first year of an annual subscription at this link.
Is it too early to say that, on balance, generative artificial intelligence has been bad for the internet?
One, its rise has led to a flood of AI-generated spam that researchers say now outperforms human-written stories in Google search results. The resulting decline in advertising revenue is a key reason that the journalism industry has been devastated by layoffs over the past year.
Two, generative AI tools are responsible for a new category of electioneering and fraud. This month, synthetic voices were used to deceive voters in the New Hampshire primary and in Harlem politics. And the Financial Times reported that the technology is increasingly used in scams and bank fraud.
Three — and what I want to talk about today — is how generative AI tools are being used in harassment campaigns.
The subject gained wide attention on Wednesday when sexually explicit, AI-generated images of Taylor Swift flooded X. And at a time when the term “going viral” is wildly overused, these truly did find a huge audience.
Here’s Jess Weatherbed at The Verge:
One of the most prominent examples on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal.
But as users began to discuss the viral post, the images began to spread and were reposted across other accounts. Many still remain up, and a deluge of new graphic fakes have since appeared. In some regions, the term “Taylor Swift AI” became featured as a trending topic, promoting the images to wider audiences.
At its most basic level, this is a story about X, and not a particularly surprising one at that. When Elon Musk took over X, he dismantled its trust and safety teams and began enforcing its written policies — or not — depending on his whims. The resulting chaos has caused advertisers to flee and regulators to open investigations around the world. (X didn't respond to my request for comment.)
Given those circumstances, it's only natural that the platform would be flooded with graphic AI-generated images. While it is rarely discussed in polite company, X is one of the biggest porn apps in the world, thanks to its longstanding policy allowing explicit photos and videos and Apple's willingness to turn a blind eye to a company that has long flouted its rules. (X is officially rated 17+ for "Infrequent/Mild Sexual Content and Nudity," a historic understatement.)
Separating consensual, permissible adult content from AI-generated harassment requires strong policies, dedicated teams and rapid enforcement capabilities. X has none of those, and that's how you get 45 million views on a single post harassing Taylor Swift.
It would be a mistake, though, to consider Swift's harassment this week solely through the lens of X's failure. A second, necessary lens is how platforms that have rejected calls to actively moderate content have created a means for bad actors to organize, create harmful content, and distribute it at scale. In particular, researchers have now repeatedly observed a pipeline between the messaging app Telegram and X, in which harmful campaigns are organized and created on the former and then distributed on the latter.
And indeed, the Telegram-to-X pipeline also brought us the Swift deepfakes, report Emanuel Maiberg and Samantha Cole at 404 Media:
Sexually explicit AI-generated images of Taylor Swift went viral on Twitter after jumping from a specific Telegram group dedicated to abusive images of women, 404 Media has found. At least one tool the group uses is a free Microsoft text-to-image AI generator. [...]
404 Media has seen the exact same images that flooded Twitter last night posted to the Telegram a day earlier. After the tweets went viral, people in the group also joked about how the attention the images were getting on Twitter could lead to the Telegram group shutting down.
I'd say there's little chance of that, given that Telegram won't even disallow the trading of child sexual abuse material. In any case, with each passing day it becomes clear that Telegram, which has more than 700 million monthly users, deserves as much scrutiny as any other major social platform — and possibly more.
A final lens through which to consider the Swift story, and possibly the most important, is the technology itself. The Telegram-to-X pipeline described above was only possible because Microsoft's free generative AI tool Designer, which is currently in beta, created the images.
And while Microsoft blocked the relevant keywords within a few hours of the story gaining traction, it is all but inevitable that some free, open-source tool will soon generate images even more realistic than the ones that polluted X this week.
It would be a gift if this were a story about content moderation: about platforms moving to remove harmful material, whether out of a sense of responsibility or legal obligation.
But generative AI tools are already free to anyone with a computer, and they are becoming more broadly accessible every day. The fact that we now have scaled-up social platforms that enable the spread of harmful content through a combination of policy and negligence only compounds the risk.
And we should not make the mistake of thinking that it is only celebrities like Swift who will suffer.
On 4chan, groups of trolls are watching livestreams of municipal courtrooms and then creating non-consensual nude imagery of women who take the witness stand. This month, nonconsensual nude deepfakes were spotted at the top of Google and Bing search results. Deepfake creators are taking requests on Discord and selling them through their websites. And so far, only 10 states have addressed deepfakes through legislation; there is no federal law prohibiting them. (Those last three links come from NBC's Kat Tenbarge, who has been doing essential work on this beat.)
The rise of this sort of abuse is particularly galling given that researchers have been warning about it for a long time now.
"This is 100% a thing that was “predicted” (obvious) *years* in advance," said Renee DiResta, research manager at the Stanford Internet Observatory, in a post on Threads. "The number of panels and articles where those of us who followed the development of the technology pointed out that yeah, disinformation tactics would change, but harassment and revenge porn and [non-consensual intimate imagery] were going to be the most significant form of abuse."
The past decade offers little hope that Congress will work to pass legislation on this subject in any reasonable amount of time. But they will at the very least have the chance soon to grandstand: on Wednesday, nominal X CEO Linda Yaccarino will make her first appearance before Congress as part of a hearing about child safety. (She'll be joined by the CEOs of Meta, Snap, Discord, and TikTok.)
In 2019, Congress blasted Facebook for declining to remove a video that artificially slowed down then-House Speaker Nancy Pelosi's speech, making her appear to slur her words. Five years later, the manipulated media is much more graphic — and the scale of harm already dwarfs what we saw back then. How many more warnings do lawmakers need to see before they take action?
Generative AI clearly has many positive, creative uses, and I still believe in its potential to do good. But looking back over the past year, it's clear that any benefits we have seen today have come at a high cost. And unless those in power take action, and soon, the number of victims who will pay that cost is only going to increase.
Elsewhere in fakes:
- YouTube removed more than 1,000 videos of deepfaked celebrities pitching scams. Among the celebrities? Taylor Swift, of course. (Jason Koebler / 404 Media)
- The use of AI tools and apps to generate pro-Israel content and mass report pro-Palestinian content is raising concerns over disinformation about the war. (Taylor Lorenz / Washington Post)
- Election misinformation is getting tens of millions of views on X; some of it is authored by Musk himself. (Jim Rutenberg and Kate Conger / New York Times)
On the podcast this week: Kevin and I try to talk Andreessen Horowitz's Chris Dixon out of continuing to invest in crypto. Plus, sorting through AI's effect on the news industry, and the year's first round of HatGPT.
On Tuesday the newsletter for paid subscribers inadvertently pasted the Governing links twice, including over where the Industry links should have gone. We updated those links on the site soon after; if you missed them and want to catch up you can find them here. Sorry about that! And thanks to all the readers who wrote in to point it out.
- The day’s biggest story is that Apple announced a huge slate of changes to its App Store policies to comply with the European Union’s Digital Markets Act:
- Apple will introduce alternative app marketplaces for European iOS users to download apps outside of the App Store. (Juli Clover / MacRumors)
- But the company is reportedly planning on adding new fees and restrictions in Europe on how users download apps outside of the App Store, as app developers consider alternate download methods. (Aaron Tilley, Salvador Rodriguez, Sam Schechner and Kim Mackrael / The Wall Street Journal)
- There’s a new reduced commission structure for iOS apps, but there’s also a new Core Technology Fee that applies to more popular apps, charged per customer. (Benjamin Mayo / 9to5Mac)
- Users will have to download third-party marketplaces from their websites, but marketplaces will still have to go through Apple’s approval process. (Jon Porter and David Pierce / The Verge)
- Game streaming apps and services will soon be allowed on the App Store. (Andrew Webster / The Verge)
- The move also means that Apple will allow European users to run alternate browser engines on iOS. Let’s see if Chrome can do to my iPhone what it does to my MacBook battery! (David Pierce / The Verge)
- Epic CEO Tim Sweeney says the changes are an “anticompetitive scheme rife with junk fees” that forces developers into a corner. Still, the company plans to bring the Epic Games Store (and Fortnite) to the iPhone. (Benjamin Mayo / 9to5Mac)
- Spotify is ready to launch in-app purchases in Europe, once a law comes into effect that will prevent Apple from charging additional fees and restricting payment processors. (Ariel Shapiro / The Verge)
- Apple’s lawsuit against the NSO Group over Pegasus spyware’s attacks on iPhone users will proceed in the US, a judge ruled. (Zac Hall / 9to5Mac)
- The FTC launched an inquiry into Microsoft, Amazon, and Google’s investments into OpenAI and Anthropic. (David McCabe / The New York Times)
- OpenAI is going back on a longstanding transparency promise, declining to share its governing documents with the public after initially promising to do so. (Paresh Dave / WIRED)
- Sam Altman has reportedly been in talks with congressional members about where and how to build semiconductor chip factories in the US. (Gerrit De Vynck and Jeff Stein / Washington Post)
- Arati Prabhakar, director of the White House Office of Science and Technology Policy, said the US and China will work together on AI safety in the next few months. Good! (Madhumita Murgia / Financial Times)
- Inside the New Hampshire Voter Integrity Facebook Group on primary day, election deniers spreading conspiracies are running rampant. (David Gilbert / WIRED)
- Content glorifying mass shootings is readily available for minors to view on social media platforms like TikTok, Discord, Roblox, Telegram and X, researchers found. (Moustafa Ayad and Isabelle Frances-Wright / Institute for Strategic Dialogue)
- The Oversight Board overturned Meta’s decision to leave up an Instagram post containing false claims about the Holocaust, saying the post violates its hate speech policies. (Oversight Board)
- Meta is rolling out DM restrictions on Facebook and Instagram for users under 16 to prevent teens from receiving unsolicited messages. (Ivan Mehta / TechCrunch)
- Amazon’s Ring says it will stop letting police request footage from user surveillance cameras, now requiring law enforcement to seek warrants first. Good! (Matt Day / Bloomberg)
- A number of British artists are discussing a class action suit against Midjourney and other AI companies, after a list surfaced of 16,000 artists that AI firms allegedly used to train their models. (James Tapper / The Guardian)
- Wikipedia in Russia was forced to shut down following pressure from Vladimir Putin’s government. (Noam Cohen / Bloomberg)
- Kids spent 60 percent more time on TikTok than YouTube last year, a study shows, even as YouTube remains the dominant platform among the demographic overall. (Sarah Perez / TechCrunch)
- Apple is seemingly ramping up its efforts to potentially introduce generative AI to iPhones. (Michael Acton / Financial Times)
- Google Gemini now powers conversations in Google Ads, which will help advertisers build and scale Search campaigns more easily. (Aisha Malik / TechCrunch)
- The Circle to Search feature is being introduced on Google Pixel 8 and 8 Pro devices, allowing users to search highlighted content. Also, users can finally use the built-in thermometer to measure body temperature. (Chris Welch / The Verge)
- Lumiere, Google’s AI video generator, can generate many things, but it apparently performs best when generating animals in ridiculous scenarios. (Benj Edwards / Ars Technica)
- Startup Hugging Face’s AI software will be hosted on Google Cloud, giving access to open source developers. (Julia Love / Bloomberg)
- Microsoft briefly reached a $3 trillion market cap, the second company to do so after Apple. (Ryan Vlastelica / Bloomberg)
- Microsoft laid off 1,900 employees at Activision Blizzard and Xbox, about eight percent of its gaming division. Blizzard president Mike Ybarra is leaving the company. (Tom Warren / The Verge)
- OpenAI is reducing the price of API access and releasing a few new models, as well as a new preview model of GPT-4 Turbo that’s intended to reduce “laziness”. (Devin Coldewey / TechCrunch)
- BeReal is starting to reach out to brands and celebrities, allowing them to sign up as “RealBrands” and “RealPeople”. (Amanda Silberling / TechCrunch)
- Twitch is changing the way it pays creators, including changing its Prime Gaming subscription payouts to a flat rate, resulting in a pay cut for some. (Ash Parrish / The Verge)
- Over 88 percent of top-ranked US news outlets block web crawlers that AI firms use to scrape data for training, data shows, but most leading right-wing media outlets don’t. (Kate Knibbs / WIRED)
- Ads within hundreds of thousands of apps can be used to track physical locations, hobbies, and family members of users, an investigation found. (Joseph Cox / 404 Media)
- Streaming sites with pirated content are on the rise, and are reaching profit margins of almost 90 percent, according to a trade group, bringing in about $2 billion annually. (Thomas Buckley / Bloomberg)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.