How Cloudflare got Kiwi Farms wrong
On platforms and the stochastic terrorism loophole
Today let’s talk about Kiwi Farms, Cloudflare, and whether infrastructure providers ought to take more responsibility for content moderation than they have generally taken.
Kiwi Farms is a nearly 10-year-old web forum, founded by a former administrator of the popular QAnon wasteland 8chan, that has become notorious for waging online harassment campaigns against LGBT people, women, and others. It came to popular attention in recent weeks after a well-known Twitch creator named Clara Sorrenti spoke out against the recent wave of anti-trans legislation in the United States, leading to terrifying threats and violence against her by people who organized on Kiwi Farms.
Ben Collins and Kat Tenbarge wrote about the situation at NBC:
Sorrenti, known to fans of her streaming channel as “Keffals,” says that when her front door opened on Aug. 5 the first thing she saw was a police officer’s gun pointed at her face. It was just the beginning of a weekslong campaign of stalking, threats and violence against Sorrenti that ended up making her flee the country.
Police say Sorrenti’s home in London, Ontario, had been swatted after someone impersonated her in an email and said she was planning to perpetrate a mass shooting outside of London’s City Hall. After Sorrenti was arrested, questioned and released, the London police chief vowed to investigate and find who made the threat. Those police were eventually doxxed on Kiwi Farms and threatened. The people who threatened and harassed Sorrenti, her family and police officers investigating her case have not been identified.
In response to the harassment, Sorrenti began a campaign to pressure Cloudflare into no longer providing its security services to Kiwi Farms. Thanks to her popularity on Twitch, and the urgency of the issue, #DropKiwiFarms and #CloudflareProtectsTerrorists both trended on Twitter. And the question became what Cloudflare — a company that has been famously resistant to intervening in matters of content moderation — would do about it.
Most casual web surfers may be unaware of Cloudflare’s existence. But the company’s offerings are essential to the functioning of the internet. And it provided at least three services that have been invaluable to Kiwi Farms.
One, Cloudflare made Kiwi Farms faster and thus easier to use by generating thousands of copies of it and storing them at endpoints around the world, where they could be delivered to users more quickly. Two, it protected Kiwi Farms from distributed denial-of-service (DDoS) attacks, which can crash sites by overwhelming them with bot traffic. And three, as Alex Stamos points out here, it hid the identity of the site’s web hosting company, preventing people from pressuring the hosting provider to take action against it.
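That third service is worth making concrete. Below is a rough sketch, in Python, of what origin-hiding looks like from the outside: a DNS lookup for a Cloudflare-proxied domain returns Cloudflare’s edge addresses rather than the host’s, so there is no visible provider to pressure. (This is an illustration, not anything Cloudflare ships: the IP ranges are a small sample of the list Cloudflare publishes at cloudflare.com/ips, and the domain and function names are placeholders.)

```python
# Rough illustration: a Cloudflare-proxied domain resolves to Cloudflare's
# edge network, not to the actual web host, which is what hides the
# hosting provider from pressure campaigns.
import socket
from ipaddress import ip_address, ip_network

# Small sample of Cloudflare's published IPv4 ranges (cloudflare.com/ips).
CLOUDFLARE_RANGES = [
    ip_network("104.16.0.0/13"),
    ip_network("172.64.0.0/13"),
    ip_network("131.0.72.0/22"),
]

def visible_ips(domain: str) -> set[str]:
    """Return the IPv4 addresses that public DNS exposes for a domain."""
    infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET)
    return {info[4][0] for info in infos}

def looks_proxied(domain: str) -> bool:
    """True if every visible IP falls inside a known Cloudflare range."""
    return all(
        any(ip_address(ip) in net for net in CLOUDFLARE_RANGES)
        for ip in visible_ips(domain)
    )

if __name__ == "__main__":
    domain = "example.com"  # placeholder; substitute any domain
    print(domain, "->", sorted(visible_ips(domain)))
    print("Behind Cloudflare's proxy?", looks_proxied(domain))
```

The point is simply that the origin server never shows up in public DNS records; only Cloudflare knows where the site actually lives.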
Cloudflare knew it was doing all this, of course, and it has endeavored to make principled arguments for doing so. Twice before in its history, the company has confronted similar high-profile moderation controversies — once in 2017, when it turned off protection for the neo-Nazi site the Daily Stormer, and again in 2019, when it did the same for 8chan. In both cases, the company took pains to describe the decisions as “dangerous” — warning that they would create more pressure on infrastructure providers to shut down other websites, a situation that would likely disproportionately hurt marginalized groups.
Last week, as pressure on the company to do something about Kiwi Farms grew, Cloudflare echoed that sentiment in a blog post. (One that did not mention Kiwi Farms by name.) Here are CEO Matthew Prince and head of public policy Alissa Starzak:
“Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.”
It’s admirable that Cloudflare has been so principled in developing its policies and articulating the rationale behind them. And I share the company’s basic view of the content moderation technology stack: that the closer you get to hosting, recommending, and otherwise driving attention to content, the more responsibility you have for removing harmful material. Conversely, the further you get from hosting and recommending, the more reluctant you should be to intervene.
The logic is that it is the people hosting and recommending who are most directly responsible for the content being consumed, and who have the most context on what the content is and why it might (or might not be) a problem. Generally speaking, you don’t want Comcast deciding what belongs on Instagram.
Cloudflare also argues that we should pass laws to dictate what content should be removed, since laws emerge from a more democratic process and thus have more legitimacy. I’m less sympathetic to the company on that front: I like the idea of making content moderation decisions more accountable to the public, but I generally don’t want the government intervening in matters of speech.
However principled these policies are, though, they are undeniably convenient to Cloudflare. They spare the company from having to consider content moderation issues in all but the rarest cases, and this has all sorts of benefits: it helps Cloudflare serve the largest possible number of customers, stay out of hot-button cultural debates, and stay off the radar of regulators who are increasingly skeptical of tech companies moderating too little — or too much.
Generally speaking, when companies can push content moderation off on someone else, they do. There is very little upside in policing speech unless it’s necessary for the survival of the business.
But I want to return to that sentiment in the company’s blog post, the one that says: “Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online.” The idea is that Cloudflare wants to take DDoS and other attacks off the table for everyone, both good actors and bad, and that harassment should be fought in (unnamed) other ways.
Certainly it would be a good thing if everyone from local police departments to national lawmakers took online harassment more seriously, and developed a coordinated strategy to protect victims from doxxing, swatting, and other common vectors of online abuse — while also doing better at finding and prosecuting their perpetrators.
In practice, though, they don’t. And so Cloudflare, inconvenient as it is for the company, has become a legitimate pressure point in the effort to stop these harassers from threatening or committing acts of violence. Yes, Kiwi Farms could conceivably find other security providers. But there aren’t that many of them, and Cloudflare’s decision to stop services for the Daily Stormer and 8chan really did force both operations further underground and out of the mainstream.
And so its decision to continue protecting Kiwi Farms arguably made it complicit in whatever happened to poor Sorrenti, and anyone else the mob might decide to target. (Three people targeted by Kiwi Farms have died by suicide, according to Gizmodo.)
And while we’re on the subject of complicity, it’s notable that for all its claims about wanting to bring about an end to cyberattacks, Cloudflare provides security services to … makers of cyberattack software! That’s the claim made in this blog post from Sergiy P. Usatyuk, who was convicted of running a large DDoS-for-hire scheme. Writing in response to the Kiwi Farms controversy, Usatyuk notes that Cloudflare profits from such schemes because it can sell protection to the victims.
In its blog post, Cloudflare compares itself to a fire department that puts out fires no matter how bad a person the resident of the house may be. In response, Usatyuk writes: “CloudFlare is a fire department that prides itself on putting out fires at any house regardless of the individual that lives there. What they forget to mention is they are actively lighting these fires and making money by putting them out!”
Again, none of this is to say that there aren’t good reasons for Cloudflare to stay out of most moderation debates. There are! And yet it does matter on whose behalf the company deploys its security guards — a service it often provides for free, incidentally. When the beneficiary is a small but committed group of the worst people on the internet, that protection enables harassment and worse.
In the aftermath of Cloudflare’s initial blog post, Stamos predicted the company’s stance wouldn’t hold. “There have been suicides linked to KF, and soon a doctor, activist or trans person is going to get doxxed and killed or a mass shooter is going to be inspired there,” he wrote. “The investigation will show the killer’s links to the site, and Cloudflare’s enterprise base will evaporate.”
Fortunately, it hasn’t yet come to that. But credible threats against individuals escalated over the past several days, the company reported, and on Saturday Cloudflare reversed course and stopped protecting Kiwi Farms.
“This is an extraordinary decision for us to make and, given Cloudflare's role as an Internet infrastructure provider, a dangerous one that we are not comfortable with,” Prince wrote in a new blog post. “However, the rhetoric on the Kiwi Farms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwi Farms or any other customer before.”
It feels like a massive failure of social policy that the safety of Sorrenti and other people targeted by online mobs comes down to whether a handful of companies will agree to continue protecting their organizing spaces from DDoS attacks, of all things. In some ways, it feels absurd. We’re offloading what should be a responsibility of law enforcement onto a for-profit provider of arcane internet backbone services.
“We do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services,” the company wrote last week. And arguably it doesn’t!
But sometimes circumstances force your hand. If your customers are plotting violence — violence that may in fact be possible only because of the services you provide — the right thing to do isn’t to ask Congress to pass a law telling you what to do. It’s to stop providing those services.
There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.
One reason this has been so effective is that the strategy is designed to resist content moderation. It offers cover to the many social networks, web hosts, and infrastructure providers that are looking for reasons not to act. And so it has become a loophole the far right can exploit, confident that so long as they don’t explicitly call for murder, they will remain in the good graces of the platforms.
It’s time for that loophole to close. In general we should resist calls for infrastructure providers to intervene on matters of content moderation. But when those companies provide services that aid in real-world violence, they can’t turn a blind eye until the last possible moment. Instead, they should recognize groups that organize harassment campaigns much earlier, and use their leverage to prevent the loss of life that will now forever be linked to Kiwi Farms and the tech stack upon which it sat.
In its blog posts, Cloudflare refers repeatedly to its desire to protect vulnerable and marginalized groups. Fighting for a free and open internet, one that is resistant to pressure from authoritarian governments to shut down websites, is a critical part of that. But so, too, is offering actual protection to the vulnerable and marginalized groups that are being attacked by your customers.
I’m glad Cloudflare came around in the end. Next time, I hope it will get there faster.
Elsewhere in stochastic terrorism: LibsOfTikTok, an account that organizes harassment campaigns against trans people on Twitter and Substack, has now inspired multiple threats of violence against children’s hospitals across the United States.
The Federal Trade Commission is investigating Amazon’s $1.7 billion purchase of Roomba maker iRobot. (Josh Sisco / Politico)
The Delhi High Court ordered Telegram to disclose mobile numbers, email addresses and IP addresses for channels that allegedly violated copyright laws, rejecting the company’s argument that doing so would violate its privacy policies. (Sofi Ahsan / Indian Express)
The Islamic State has minted its first NFT, raising fears that immutable blockchains will be used to spread recruiting messages in ways that are resistant to being removed. (Ian Talley / Wall Street Journal)
Iranian authorities plan to use facial recognition technology to enforce a new hijab law. (Weronika Strzyżyńska / Guardian)
A look at Kollona Amn, available in both Apple and Google’s app stores, which allows Saudi Arabians to report their fellow citizens for speaking out against the government. Some people reported on the app have received lengthy prison sentences. (Peter Guest / Insider)
A third-party audit of Twitter submitted by whistleblower Peiter “Mudge” Zatko shows the company’s struggles with misinformation, including allowing a QAnon adherent to submit fact checks as part of its Birdwatch program. “In one of the most startling parts of the report, a headcount chart said Site Integrity had just two full-time people working on misinformation in 2021, and four working full-time to counter foreign influence operations.” (Elizabeth Dwoskin, Joseph Menn and Cat Zakrzewski / Washington Post)
Amazon quietly introduced a 72-hour delay between when reviews are posted and when they appear on the site, an effort to separate bots and trolls from ordinary humans. (Adam B. Vary and Jennifer Maas / Variety)
Snap cut a team working on web3 projects as part of its recent massive layoffs. (Emily Nicolle / Bloomberg)
Apple plans to double the size of its digital advertising team, a sign of how App Tracking Transparency has helped enable a massive wealth transfer to the company from Meta and other businesses that relied on third-party tracking. (Patrick McGee / Financial Times)
A look at AI startup Sanas, which uses machine learning to erase the accents of call center workers. Does this lessen bias, or exacerbate it? (Wilfred Chan / Guardian)
A look at the privacy implications of large language models like GPT-3, which will likely expose sensitive personal information as they improve at scraping and understanding the web. (Melissa Heikkilä / MIT Technology Review)
Talk to me
Send me tips, comments, questions, and DDoS attacks: email@example.com.