A new presidential administration has a way of resetting the conversation.
The election of Donald Trump in 2016 triggered a global reckoning over the power that tech platforms have to spread misinformation and empower right-wing authoritarians. Since Joe Biden took office, I’ve been eager to see how the broader conversation around tech and society would change. And just a few months in, it’s clear that the prevailing narrative has flipped: the big story is no longer about what Big Tech is leaving up — it’s about what the platforms are taking down.
From India to Australia to Palestine, each day brings a new set of stories about outrage over content removals. In some cases, these removals are forced by the government. In others, platform policies put minority groups at a disadvantage, making it harder for their posts to be seen. But whatever the cause, cries of censorship are only growing louder — and how platforms respond will have huge political implications around the world.
There are two strains of outrage related to censorship currently coursing through the platforms. The first stems from governments enacting increasingly draconian measures to prevent their citizens from expressing dissent. While this has long been the norm in countries like China and Russia, the practice has more recently spread to democratically elected governments as well.
In India, for example, the Modi government has implemented new rules that would require encrypted messaging apps like WhatsApp to make messages “traceable” — breaking encryption around the world. (WhatsApp has sued in an effort to prevent this rule from taking effect.) Meanwhile, India’s government initiated a police raid on Twitter headquarters in Delhi after the company accurately labeled a party spokesman’s tweet as a forgery.
India is also the latest country to require platforms to appoint local representatives who can be threatened when a post finds disfavor with the government, part of a movement that advocates for free expression call “hostage-taking laws.”
Australia’s drug regulator is considering referring COVID vaccine misinformation posts to the federal police, after anti-vaccine campaigners targeted a Labor MP who posted about getting the jab.
The alleged crime is not that the anti-vaxxers posted misinformation, exactly, but rather that they falsely suggested their views had been endorsed by the country’s equivalent of the Food and Drug Administration. “The TGA noted it is a criminal offense, punishable by two years in prison, to represent oneself as a commonwealth body or acting on behalf of one,” the Guardian’s Paul Karp writes.
Certainly I’m happy to see anti-vaxx fraudsters removed from Facebook. But jailing people over social media posts for reasons other than violent threats would seem to be an escalation of government control over online expression, and the trend bears watching.
The spike in government censorship is not limited to tech platforms; recently, regimes in Zimbabwe and Myanmar have been arresting journalists doing straightforward reporting in those countries. But because tech platforms have often given a voice to dissidents in places that reporters can’t or won’t go, the creeping restrictions on digital speech deserve special scrutiny.
The second and perhaps more novel strain of outrage over censorship relates not to governments but to platforms themselves. During the Trump era, it became an article of faith among conservatives that they were being censored by Big Tech, and that this censorship was for ideological reasons. (The fact that conservatives benefited hugely from tech platforms, and often dominated lists of the most popular posts on Facebook, never seemed to register.)
Eventually, conservatives came to label any undesirable outcome on social networks as censorship. It wasn’t just posts being removed — it was also their names not appearing high enough in search results, or their tweets not getting enough likes, or losing followers during a purge of QAnon accounts. In 2018, the Republican-controlled House of Representatives held a hearing because the conservative vloggers Diamond and Silk were getting less reach on their Facebook posts than they used to.
Liberals mostly rolled their eyes at all this, and for good reason. But now complaints about “censorship” from the left are on the rise, even when nothing under discussion would seem to qualify as censorship using any rational definition of the term.
Take the current debate over Palestine. Israel’s bombing campaign in Gaza has triggered protests around the world, many of which are playing out on social networks. During this period, platforms have made a variety of technical mistakes that resulted in pro-Palestine content being removed or invisible in search results. There are no real policy issues at play here — Facebook, Twitter, and YouTube take no position against expressing support for Palestine — and yet they have been broadly accused of censorship anyway.
Remarkably, these accusations are already leading to policy changes. Over the weekend, Instagram announced it would change its algorithm so that feed posts re-shared as ephemeral stories are treated the same by its ranking systems as native story posts. Sharing feed posts to stories has become a primary way that Instagram is used for activism — the carousel format available for feed posts has been adopted by activists as a kind of Instagram analog to tweet storms.
Instagram has historically reduced the reach of these posts, not to censor activism but to promote the growth of stories as a format. But the company has such low trust with users that a commercial move was interpreted as a political one, and the company has scrambled to undo the damage. (The company told the BBC it has been considering this move for a while, and that it was not “only” a reaction to the Gaza issue.)
Even if none of this is “censorship” under the traditional definition of that term, it does highlight a very real issue in global-scale content moderation: speech policies tend to put minority groups at a disadvantage. Elizabeth Dwoskin and Gerrit De Vynck wrote about the issue in the Washington Post:
Some activists say many posts are still being censored. Experts in free speech and technology said that’s because the issues are connected to a broader problem: overzealous software algorithms that are designed to protect but end up wrongly penalizing marginalized groups that rely on social media to build support. Black Americans, for example, have complained for years that posts discussing race are incorrectly flagged as problematic by AI software on a routine basis, with little recourse for those affected.
Despite years of investment, many of the automated systems built by social media companies to stop spam, disinformation and terrorism are still not sophisticated enough to detect the difference between desirable forms of expression and harmful ones.
And lest we dismiss all this as working the refs among activist groups, it’s worth noting that some of the concerns about censorship are coming from Facebook’s own employees. On Tuesday, the Financial Times reported that almost 200 employees had signed a letter demanding an audit of moderation policies, saying pro-Palestinian voices were too often being silenced on the platform.
“As highlighted by employees, the press and members of Congress, and as reflected in our declining app store rating, our users and community at large feel that we are falling short on our promise to protect open expression around the situation in Palestine,” the employees wrote. “We believe Facebook can and should do more to understand our users and work on rebuilding their trust.”
It’s tempting to find irony in a group that spent the past half-decade complaining about platforms being too permissive suddenly accusing those same platforms of cracking down too hard. But I still see a mostly coherent story: one of platforms that helped enable the rise of authoritarians, only to see those authoritarians use their newfound power to crack down on dissent. (While also continuing to use the platforms constantly for self-promotion.) And corporate content moderation, which is designed by necessity to be responsive to government requests, winds up stifling more speech than intended, even if only for technical reasons.
What’s clear is that in a world where authoritarianism is on the rise, people around the world continue to view social networks as critical venues for protest and debate. One of the biggest questions of the next half-decade will be in how many places Facebook, Twitter, YouTube, and others can live up to that ideal. In a growing number of countries, it’s getting harder every day.
Today in news that could affect public perception of the big tech companies.
⬇️ Trending down: Workers at Amazon warehouses are injured at nearly double the rate of workers at non-Amazon warehouses, according to new data from the Occupational Safety and Health Administration. The news comes barely a month after Jeff Bezos announced he would work to make Amazon “Earth's Best Employer and Earth's Safest Place to Work.” (Jay Greene and Chris Alcantara / Washington Post)
⭐ Some defendants in the January 6 Capitol attack are blaming their actions on election misinformation. But it’s unclear how effective that will be as a defense. Here’s David Klepper at the Associated Press:
Lawyers for at least three defendants charged in connection with the violent siege tell The Associated Press that they will blame election misinformation and conspiracy theories, much of it pushed by then-President Donald Trump, for misleading their clients. The attorneys say those who spread that misinformation bear as much responsibility for the violence as do those who participated in the actual breach of the Capitol.
“I kind of sound like an idiot now saying it, but my faith was in him,” defendant Anthony Antonio said, speaking of Trump. Antonio said he wasn’t interested in politics before pandemic boredom led him to conservative cable news and right-wing social media. “I think they did a great job of convincing people.”
⭐ The Biden administration said it would continue to defend a controversial Trump-era rule that visa applicants have to register their social media handles with the government. The requirement emerged from Trump’s awful “extreme vetting” program for Muslims. (Carrie DeCell / Knight First Amendment Institute)
⭐ WhatsApp reversed course and said it would not limit functionality for users who do not accept its controversial new terms of service. The ToS nightmare continues for Facebook as it attempts to fight off a hostile Indian government, which is suing to prevent the terms from taking effect. (Jay Peters / The Verge)
Facebook, Google, and other tech giants appear to be mostly complying with India’s heavy-handed new IT ministry rules. Among other steps, the companies have appointed local “compliance” officers that can be harassed by police when someone criticizes the Modi government. (Manish Singh / TechCrunch)
An attempt to build a class-action lawsuit against Facebook has transformed into a three-year “partnership” between complainants and the company. Euroconsumers, a group of European consumer agencies, had promised up to €200 in compensation to more than 300,000 people who signed up; it now seems unlikely they’ll see any money. (Pieter Haeck / Politico)
Unredacted court documents show Google employees discussing how it is virtually impossible for users to shield their location from the company. “Jack Menzel, a former vice president overseeing Google Maps, admitted during a deposition that the only way Google wouldn't be able to figure out a user's home and work locations is if that person intentionally threw Google off the trail by setting their home and work addresses as some other random locations.” (Tyler Sonnemaker / Insider)
Amazon quietly changed its terms of service to permit lawsuits against the company after lawyers flooded the company with more than 75,000 individual arbitration demands. Companies love private arbitration because it allows them to settle cases more quickly, and on terms favorable to them; but now lawyers are realizing they can use this system to soak giants for tens of millions of dollars in legal fees. (This story is hilarious.) (Sara Randazzo / Wall Street Journal)
Apple said it had found no child labor violations in its latest supplier responsibility report. “The company reported it has 93% compliance with its working-hours code, which stipulates working weeks should not exceed 60 hours and overtime should in all cases be voluntary.” (Vlad Savov and Debby Wu / Bloomberg)
⭐ Twitter is launching a local weather service, offering a mix of free and paid content on its suite of creator tools. Led by veteran climate journalist Eric Holthaus, Tomorrow will “produce newsletters and exclusive long-form content on Twitter via the company's newly-acquired newsletter platform Revue, as well as membership-specific short-form content for users, such as ticketed live audio sessions via Twitter Spaces and audience Q&A services.” (Sara Fischer / Axios)
Signal continues to face internal dissent amid its push into cryptocurrency payments, which employees worry could put the encrypted messaging app into the crosshairs of regulators. This report largely echoes the concerns I found when I wrote about the company in January, but delves further into dissatisfaction with CEO Moxie Marlinspike’s leadership style. (Ryan Gallagher / Bloomberg)
A look at the rise of Instagram giveaways, which no one seems to be winning. Promoted by celebrities and used to rapidly acquire followers, it’s often unclear if the giveaways are legitimate. (Allie Jones / Vox)
Talk to me
Send me tips, comments, questions, and censored protests: firstname.lastname@example.org.