How platforms fared in fighting the last war

Facebook and Twitter learned a lot of lessons in 2016. Did it help in 2020?

A woman participates in a protest in support of counting all votes Thursday in Philadelphia, Pennsylvania. (Chris McGrath / Getty Images)

As many predicted it would, the outcome of the 2020 election is taking many days to determine. A surge of misinformation on social networks, led by President Trump and his family, has challenged the platforms’ content policies and drawn fresh attention to their enforcement capabilities. It’s too early to say whether the platforms have fully risen to the 2020 challenge — given the bellicose nature of the Trumps’ pronouncements this week, their hardest work is very likely still ahead of them.

But in the meantime, I think it’s not too early to ask a related question: how did platforms handle the 2016 challenge in 2020?

After all, it was the aftermath of Trump’s election win that reshaped public opinion about tech platforms, drew broad regulatory scrutiny around the world, and spurred the companies to reshape their policy and content moderation efforts. Now that all the ballots have been cast, if not quite tallied, how did they rise to the challenge?

There were two major problems that became visible in the immediate aftermath of 2016. (A third — data privacy — popped up about a year later, in the form of Cambridge Analytica. But data privacy wasn’t really a campaign story this year.)

The first problem was that what came to be called “fake news” — false and hyper-partisan stories, published both by political operatives and profit-hungry spammers — had overwhelmed Facebook. These stories often outperformed real news stories, generating millions of interactions throughout the campaign. Subsequent research found that while fake news likely did not change many votes, it may have accelerated polarization in the United States.

Perhaps worse, from Facebook’s perspective, fake news created a brand problem. If the company’s chief cash cow, the News Feed, came to be seen as a welcome home for hoaxes and spam, the company’s long-term fortunes could be imperiled. And so Facebook developed partnerships with a bipartisan network of fact checkers, added prominent labels to disputed stories, and changed the News Feed algorithm to favor posts from friends over links from publishers.

The result? The News Feed is no longer hospitable to Macedonian teens looking to turn a quick profit by exploiting America’s partisan divide. And whatever the outcome of the 2020 election, it seems unlikely that “fake news” will be identified as a primary reason anybody won. At least, not of the troll-farm variety that got us all so worked up in 2016.

In the 2020 election, misinformation may have been more likely to spread via robocall, text message, and email than it was via social network. That suggests bad actors increasingly find Facebook and Twitter too expensive or time-consuming to use to spread hoaxes — just what those platforms had hoped for when they began working on integrity efforts.

That’s the good news. The bad news is that Facebook is far from a healthy information environment. New York Times journalist Kevin Roose has documented that conservative outrage bait often dominates the top stories of the day in total number of interactions. And while Facebook argues that those interactions don’t accurately reflect what most people see in the News Feed, the company also hasn’t offered any ways for the public or for researchers to see that data for themselves. And so what we do know of news on Facebook is that the popular stuff tends to be more partisan than not; basically everything else is still a question mark.

At the same time, we also now know that whatever false claims social networks amplify, the claims typically do not originate there. As I wrote in the first edition of Platformer, research on disinformation about mail-in voting shows that conspiracy theories travel much more effectively via mass media than they do on tech platforms. (Though the platforms play an important supporting role.)

Meanwhile, tens of thousands of journalists around the country have lost their jobs over the past decade. (The platforms’ success in disrupting the digital advertising market had a crucial hand in that.) It’s hard to think of a more banal conclusion to a topic as complicated as this one than to say, with a furrowed brow, “a more holistic approach is needed.” But it is!

The second major problem that Facebook uncovered in 2016 was, famously, foreign interference on the platform. A Kremlin-linked troll farm known as the Internet Research Agency sought to sow division in America, using fake accounts to promote pages encouraging Texas to secede, Black voters to stay home, and protesters and counter-protesters to show up at the same real-world rallies. Those moves were part of a larger Russian campaign to interfere with the election, which also included hacking the Democratic National Committee and disseminating its emails to media outlets.

In response, Facebook dramatically expanded its teams working on what it newly called “platform integrity.” Between full-time employees and contractors working on content moderation, the team swelled to more than 30,000 people by the end of 2018. Among its hires were former US intelligence agency workers who helped Facebook root out influence operations. They began regularly reporting on takedowns at home and abroad. By September of this year, Facebook had taken down a dozen new campaigns from Russia’s IRA alone.

Again: this is good news. But by the 2020 campaign, there was no longer any need for a Russian influence operation to sow discord here. The discord was all homegrown, and the president issued a series of autocratic social media posts demanding that the vote count be stopped before it was complete and baselessly asserting that he had been the victim of fraud.

“Nothing that Russia or Iran or China could say is anywhere near as wild as what the president is saying,” Clint Watts, a former FBI agent who tracks foreign disinformation, told NBC. “We cannot say this time that Russia, Iran or China interfered in a significant way. They don’t need to write fake news this time; we’re making plenty of fake news of our own.”

In short, Facebook and Twitter — and, to a lesser extent, YouTube — met two of the key challenges revealed by the 2016 election. But in 2020, those issues can now be recognized as minor symptoms of much larger problems. And it’s not at all clear that the platforms are in a position to solve them.

It’s one thing to stop misinformation when it’s coming from Macedonian teenagers, and quite another to stop it when it’s coming from the president of the United States. Despite heavy political pressure, both Facebook and Twitter have intervened repeatedly this week to remove or label the president’s false posts claiming he had already won the election. But his message has spread nonetheless, particularly on YouTube, which has no policy against claiming premature victory.

Trump’s message spread because disinformation is an effective political strategy. If you lie constantly — and are supported by a network of enablers, cable news networks, talk radio, and platform amplification and recommendation algorithms — you can amass a huge following. Take your lies to live television, which will air your baseless claims in real time, and you can grow that following even further.

And so while neither fake news nor foreign influence operations decided the 2020 election, another kind of influence operation made use of Facebook, Twitter, and YouTube every day, and continues to do so. The platforms are handmaidens in the spread of hyper-partisan information — indeed, in many ways they optimize for it.

And on that front, 2016 and 2020 don’t look very different at all.


The Ratio

Today in news that could change public perception of the big tech companies

⬆️ Trending up: Facebook and TikTok blocked hashtags that were being used to spread election misinformation. #sharpiegate, #stopthesteal, and #riggedelection are among the blocked tags. (Jacob Kastrenakes / The Verge)

⬇️ Trending down: The One America News Network uploaded a new video to YouTube on Thursday prematurely claiming that Trump had won the election. After a reporter alerted the company, it affixed a confusing “Results may not be final” label. The results are not final until the election is certified. (Jennifer Elias / CNBC)


Governing

Fearful of post-election civil unrest, Facebook said it would move forward with implementing new “friction” measures designed to slow the spread of posts across the network in the United States. The company also plans to reduce the distribution of election-related misinformation. Mike Isaac has the story at the New York Times:

The measures, which could be rolled out as soon as Thursday, are a response to heightened strife and social discord on Facebook, these people said. They said there had been more activity by users and Facebook groups to coordinate potentially violent actions over election issues such as voter fraud. President Trump has falsely claimed on social media over the past few days that the election is being “stolen” from him, even while a final result remains unclear.

Related: hashtags and keywords tracked by Facebook in an internal dashboard predict a rising threat of violence. (Ryan Mac and Craig Silverman / BuzzFeed)

Facebook removed Stop the Steal, a group dedicated to the baseless proposition that the election is being stolen, after members used it to call for real-world violence. The group acquired more than 360,000 members in 48 hours — a powerful example of the effectiveness of Facebook’s recommendation algorithms. (Sheera Frenkel / New York Times)

Twitter would no longer grant Trump’s account special privileges were he to lose the presidency, the company confirmed. That raises the prospect that Trump could lose access to his Twitter account if he repeatedly violated company policies after January. (Kurt Wagner / Bloomberg)

The Justice Department sued to prevent Visa from buying the financial data startup Plaid, saying the deal would harm competition. The era of meaningful antitrust enforcement is upon us in America once again! (Dan Primack / Axios)

Miles Taylor, the “Anonymous” Trump critic and former Department of Homeland Security staffer, will not return to Google from his leave of absence. Finally, a 2020 outcome that anyone could have predicted. (Hamed Aleaziz and Ryan Mac / BuzzFeed)

Portland, Maine, is the latest municipality to ban the use of facial recognition technology. Voters passed an initiative entitling them to a minimum of $1,000 in fees if they are surveilled in violation of the ordinance. (Russell Brandom / The Verge)

America needs a centralized federal elections agency, this piece argues. Having 50 sets of rules adds to the feeling of chaos as ballots are cast, particularly in a hyper-polarized time. (Charlotte Hill and Lee Drutman / New York Times)

Smaller tech companies are in talks to form a lobbying organization that will promote their interests at a time when regulation focused on giants threatens to hurt them disproportionately. Dropbox, Reddit and Etsy are among the companies participating. (Christopher Stern and Nick Bastone / The Information)

Chinese state media is having great fun roasting Americans for their wayward election. “Trump was often referred to as the ‘know-it-all’ while Biden, though less common, was dubbed the ‘political evergreen tree.’” Savage! (Sebastian Skov Andersen and Joyce Leung / Vice)

Millions of people turned to TikTok for virtual election night watch parties. The app proved popular among both liberal and conservative members of Generation Z. (Taylor Lorenz / New York Times)


Industry

WhatsApp is rolling out a new encrypted ephemeral messaging feature around the world. It’s a feature that I expect will quickly attract blowback from regulators who will worry about its potential for facilitating crimes. (It will also be great for activists. Trade-offs!) Ingrid Lunden and Manish Singh have the story at TechCrunch:

The 7-day limit will exist regardless of whether the message gets read or not. (The disappearing message clock starts counting when the message is sent, as it does on other apps like Telegram.)

“The way it’s currently designed is to give the sender confidence that after 7 days their message is gone. The messages have no concept of being seen, for them to disappear, so they will disappear regardless of read status,” said the spokesperson.
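
To make that behavior concrete, here is a minimal Python sketch of expiry keyed to send time rather than read status. This is not WhatsApp’s implementation, just an illustration of the rule the spokesperson describes; every name in it is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional
import time

SEVEN_DAYS = 7 * 24 * 60 * 60  # expiry window, in seconds

@dataclass
class Message:
    text: str
    sent_at: float = field(default_factory=time.time)
    read: bool = False  # tracked for the UI, but never consulted for expiry

    def is_expired(self, now: Optional[float] = None) -> bool:
        # The clock starts when the message is sent; read status is irrelevant.
        now = time.time() if now is None else now
        return now - self.sent_at >= SEVEN_DAYS

def purge(messages: list[Message]) -> list[Message]:
    # What a client might run periodically to drop expired messages.
    return [m for m in messages if not m.is_expired()]
```

The key design choice, per WhatsApp’s statement, is that `is_expired` never looks at `read`: an unread message vanishes on the same schedule as one that was seen immediately.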

WhatsApp got permission to expand its payments service in India. It’s a win for Facebook, which has sought to capitalize on WhatsApp’s popularity in India to build a thriving payments business globally. (Manish Singh / TechCrunch)

Meet the Facebook lawyer trying to prevent election chaos. Molly Cutler leads the company’s strategic response team, which was formed after the Cambridge Analytica scandal to coordinate communications amid crisis. (Alex Heath / The Information)

ByteDance is raising fresh capital at a valuation of $180 billion. The company is seeking $2 billion ahead of an initial public offering for some of its businesses. (Bloomberg)

TikTok is testing a learning tab to showcase how-to and educational videos. If it rolls out globally, it will be the app’s third tab, alongside its central For You feed and the feed of users that you are following. (Ingrid Lunden / TechCrunch)


Talk to me

Send me tips, comments, questions, and premature election postmortems: casey@platformer.news.