Twitter adds friction
Can you slow down the internet with quote-tweets?
One problem with the internet as it exists today is that it is too fast. Every day brings a story about a bad thing that spread too quickly for platforms to arrest its growth. We use the language of disease to describe this phenomenon — the internet’s biggest successes are said to have gone viral — and yet it is rarely discussed as a condition in need of a cure.
Increasingly, though, platforms have become more open to the idea of slowing things down. WhatsApp now limits the ability of users to forward messages, for example. TikTok proactively reviews videos before disseminating them on its main feed. Facebook told me recently that it has added a “virality circuit breaker” to ensure that the fastest-spreading posts on the platform are seen by a moderator to ensure they are not in violation of the company’s community standards.
On Friday, Twitter entered the conversation about speed. The company announced that it would take several steps intended to slow tweets down — most notably, by asking users to add their own commentary before blindly re-sharing a post. Here’s Shirin Ghaffary in Recode:
The changes include prompting people not to retweet without adding their own commentary, turning off automatic recommendations for other people’s tweets, and adding more context to its Trending section. Twitter will also start putting more warning labels on misleading tweets by US politicians and accounts with more than 100,000 followers, and block users from “liking” or replying to those tweets. And if a politician declares premature victory before it’s verified by independent sources, Twitter will label the tweet and direct users to its voter information page.
Taken as a whole, the moves represent the sort of significant systemic change that some misinformation experts say is necessary to slow the spread of viral lies on the platform, especially those about the election process and results.
Some of these policies show Twitter playing the sort of pre-emptive election defense that Facebook is playing — the subject of my Thursday column. Pushing users to add commentary before tweeting, though, is something new. In a blog post, Twitter executives described the value they hope will come from adding friction to one of the platform’s core mechanics.
“We hope it will encourage everyone to not only consider why they are amplifying a tweet, but also increase the likelihood that people add their own thoughts, reactions and perspectives to the conversation,” wrote Vijaya Gadde, who leads trust and safety at Twitter, and Kayvon Beykpour, who leads the product team.
Since arriving on the platform in 2015, quote-tweets have become famous for their hostility. A core use of the feature is to identify a bad tweet and to dunk on it — saying “LOL look at this clown,” or something less polite. This increases the level of bile on Twitter generally, but it also regularly amplifies bad tweets into other people’s timelines. For this reason, critics have called on the company to remove the feature altogether.
Pushing would-be retweeters to the quote-tweet could balance this a bit. In asking people to add a few words before they post, Twitter is effectively asking them to endorse the message they are sharing. To say, “look at this good tweet,” rather than “look at this clown.” And that absolutely will give some people pause, if the number of folks who still have “retweets are not endorsements” in their bios is any indication.
I’m glad Twitter is giving this a shot, even if these and other election-related platform policy changes have an unfortunate flying-by-the-seat-of-our-pants quality. Social networks had four years to get their houses in order for the next presidential election, and they roll out their best thinking … when early voting is already under way? Should it all go disastrously wrong, there won’t be much time to implement fixes before Election Day arrives.
But there’s another consideration I hope Twitter makes as it implements these changes. A corollary of the internet being too fast is that Twitter’s enforcement is still generally too slow. Even when the company flags a politician’s tweet for violating its policies, as it did for an absurd claim by President Trump on Sunday, that flag is generally hours in coming. By the time the flag arrives, the tweet has already spread.
If moderation teams are properly staffed, and policies are clearly written, it should not take hours to enforce them. By now I imagine there must be dozens of Twitter employees who are notified of every Trump tweet as it is posted. In the run-up to the election, is there any reason why they should not be able to take any action that is warranted within 60 minutes? What would it take for them to do it in 30?
As they take steps to slow the internet down, platforms are encouraging us to judge them by how effectively they reduce the spread of harmful posts. But it’s not enough to say that you slowed the spread — you have to slow it quickly, too.
Coming tomorrow: Facebook bans Holocaust deniers.
On email length
Thanks to everyone who wrote in with their thoughts on Platformer’s length, and whether I ought to write shorter to accommodate the truncation that happens on Gmail and other platforms. I spent much of the weekend writing back to the dozens of you who weighed in — it means a lot to me that so many people took time out to share their opinion.
In this case, your opinion is very clear: most of you like your Platformer long. Many of you told me that you can get an emailed list of bullet points about the day anywhere, but that Platformer stands out for its more comprehensive coverage. For that reason, you can continue to expect updates in the 2,500-word range.
If that’s too much, I wanted to share two alternate ways of reading Platformer. One is to read only the starred links — I add a star next to two or three stories every day that I think are truly worth your time. Two is to skip reading the links altogether — if it’s truly important, it will probably end up in a column at some point.
I’m also interested in recording an audio version of each day’s column so you can listen as you commute or do your dishes. For that, though, I’ll need some editing help. So if that appeals to you, please consider becoming a member today.
Today in news that could change public perception of the big tech companies.
⬆️ Trending up: Facebook and Instagram will pin vote-by-mail explainers to the top of feeds. Users are getting state-specific guidance on how to vote through the mail. (Taylor Hatmaker / TechCrunch)
⬇️ Trending down: Facebook ads promoting disinformation about climate change reached an estimated 8 million people, for a cost of just $42,000. (Scott Waldman / E&E News)
⬇️ Trending down: The National Labor Relations Board accused Google contractor HCL America of illegally discouraging workers from joining a union. A group of about 90 data analysts in Pittsburgh voted to unionize anyway last fall. (Noam Scheiber / New York Times)
⭐ Google could be forced to sell off its Chrome browser and parts of its advertising business under an antitrust lawsuit now being considered by the Justice Department. Can’t say I saw this proposed remedy coming. Here’s Leah Nylen at Politico:
The conversations — amid preparations for an antitrust legal battle that DOJ is expected to begin in the coming weeks — could pave the way for the first court-ordered break-up of a U.S. company in decades. The forced sales would also represent major setbacks for Google, which uses its control of the world’s most popular web browser to aid the search engine that is the key to its fortunes.
Discussions about how to resolve Google’s control over the $162.3 billion global market for digital advertising remain ongoing, and no final decisions have been made, the people cautioned, speaking anonymously to discuss confidential discussions. But prosecutors have asked advertising technology experts, industry rivals and media publishers for potential steps to weaken Google’s grip.
A coalition of Western governments is once again calling for an end to encryption. The Five Eyes alliance issued similar calls in 2018 and 2019, but now its call for tech companies to add backdoors for law enforcement has been joined by Italy and Japan. (Catalin Cimpanu / ZDNet)
Facebook alerted law enforcement to potential threats from a Michigan militia six months before a group of men associated with it was charged with conspiring to kidnap Gov. Gretchen Whitmer. It’s bad that the militia organized on Facebook — but it also seems like the fact that it did so made it much easier for the FBI to apprehend the accused. (Kurt Wagner and Christian Berthelsen / Bloomberg)
At least five recently removed militia groups appear to have been able to re-join Facebook under different names. (Salvador Hernandez and Ryan Mac / BuzzFeed)
Facebook has struggled to fight bad actors who take simple steps to re-post misinformation that it has already removed. “Purveyors of misinformation have successfully evaded Facebook's content review systems — both human and automated — by taking simple steps such as reposting claims against different-colored backgrounds, changing fonts and re-cropping images,” according to the watchdog group Avaaz. (Brian Fung / CNN)
Pakistan banned TikTok over “immoral and indecent” videos. Another step toward the Splinternet. (Manish Singh / TechCrunch)
A Chinese browser that allowed users to access heavily censored versions of YouTube, Facebook and other Western services was removed from app stores after surging to millions of downloads. Tuber, which was available only on Android phones, had briefly raised the prospect that China might be relaxing internet access restrictions. (Bloomberg)
YouTube may be the largest and most consequential platform still hosting QAnon content. While YouTube has removed thousands of Q-related videos and accounts, it has yet to take the more decisive approach enacted by Facebook and Twitter. (Tom Porter / Business Insider)
QAnon is making huge inroads in Germany, in part thanks to YouTube. The movement has an estimated 200,000 German followers across YouTube, Facebook and Telegram. (Katrin Bennhold / New York Times)
Peloton removed QAnon hashtags from the platform. The fitness platform lets users connect by adding hashtags to their profiles, and had recently seen a surge in Q-related tags. Peloton said they represent “hateful content.” (Rachel E. Greenspan / Business Insider)
Joe Biden’s Twitch channel banned the word ‘frack’ and mentions of ‘war crimes.’ Fracking was, of course, a major subject of last week’s vice presidential debate. (Matthew Gault / Vice)
Fox News warps reality by taking fact-based stories and serializing them into a conspiracy-based thriller, this critic writes. The network’s appeal among fans owes a lot to the fact that it is “structured a lot like a serialized puzzle box drama, like Lost or Stranger Things.” (Emily VanDerWerff / Vox)
Coinbase CEO Brian Armstrong’s response to the Black Lives Matter movement, which has already led 5 percent of his workforce to quit, highlights a latent reactionary streak in Silicon Valley. Companies are wrestling with how much room to make for political discussion during a hyper-polarized time. (Nitasha Tiku / Washington Post)
The (rocky) rise of Clubhouse has inspired a new wave of audio-first startups. Betty Labs, Geneva, Chalk, Rodeo and Spoon are among the startups now raising money among the bold-faced names of venture capital. (Kate Clark / The Information)
Yelp will begin flagging businesses that have been accused of racist behavior. No one really seems to know how this is going to go, but it comes in response to the fact that businesses are often review-bombed after media reports about negative incidents that take place there. If nothing else, this could add useful context — but some worry it will be ripe for abuse. (Tim Carman / Washington Post)
Google is asking creators to tag and track products featured in their videos. It’s the apparent first step in turning YouTube into a major shopping destination. (Mark Bergen and Lucas Shaw / Bloomberg)
Microsoft is letting more employees work from home permanently. When even stodgy old Microsoft is embracing the long-term remote work trend, you know it’s here to stay. (Tom Warren / The Verge)
Some Facebook moderators are being required to return to Accenture offices this week. Some moderation is better or more safely done on site than in remote situations, but the move reflects the class divide between contractors and full-time Facebook employees, who will mostly get to work remotely indefinitely. (Craig Silverman and Ryan Mac / BuzzFeed)
Some of those workers have now signed a petition requesting hazard pay. (Lauren Kaori Gurley / Vice)
Facebook launched a bug bounty “loyalty program.” The more verified bugs you submit, the higher your bonus. (Catalin Cimpanu / ZDNet)
Facebook’s automated systems rejected an ad for onions, declaring the allium to be “overtly sexual.” The business owner believes that “something about the round shapes” made Facebook’s AI think that it was looking at breasts or buttocks. (BBC)
Those good tweets
Talk to me
Send me tips, comments, questions, and thoughtful quote-tweets: email@example.com.
Yes please on an audio version of each day’s column! Only downside is that podcast/audio software hasn't figured out how to link soundbites. I dream of a world where I could double tap my left earbud as you said "Facebook told me recently that it has added a 'virality circuit breaker'" and it would take me to the audio version of THAT article.
Hey, Casey! Question for you - with Twitter allowing people to still RT without adding their own text, is there any data to back up that this change will slow the spread of disinformation? Assuming the vast majority of these tweets/RTs come from people and not bots, will changing the UX slightly have much of an impact? I can understand the added friction if the quote RT was mandatory but otherwise it feels similar to how it was before. I can't seem to find anything to back up their claim so wanted to see your take. Thanks!