How YouTube failed the 2020 election test
Three takeaways from a post-mortem on misinformation in the 2020 campaign
Today let’s talk about a comprehensive new report on election integrity, and the especially low marks it gave to one platform.
The 283-page report, which was published today, is entitled “The Long Fuse: Misinformation and the 2020 Election.” It is the final work of a coalition of some of the most respected names in platform analysis in academia and the nonprofit world: the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, Graphika, and the Atlantic Council’s Digital Forensic Research Lab.
The report builds on work that the partnership did leading up to and after November to identify and counter false narratives about the 2020 US presidential election. It describes its goals this way:
The EIP’s primary goals were to: (1) identify mis- and disinformation before it went viral and during viral outbreaks, (2) share clear and accurate counter-messaging, and (3) document the specific misinformation actors, transmission pathways, narrative evolutions, and information infrastructures that enabled these narratives to propagate.
The hope was that by better understanding how misinformation spreads on social networks, the partnership could push platforms to develop better policy and enforcement tools to reduce the impact of bad actors in the future.
Reading through the report, there’s a lot to be impressed by. Foreign interference, which all but defined the 2016 US presidential election, played almost no perceptible role in 2020. After making huge investments in safety and security, platforms really did get better at identifying fake accounts and state-backed influence campaigns, and generally removed them before they could do much damage.
The flip side of this, of course, is that 2020 gave US platforms an arguably even more difficult problem to confront: the virulent spread of election-related misinformation from domestic sources, most prominently President Trump, his two adult sons, and a potent ecosystem of right-wing publishers and influencers. Perhaps the report’s most crucial finding, however obvious, is that misinformation in 2020 was an asymmetric phenomenon. The lies were spread primarily by right-wing actors in the hope of overturning the result of an election that, despite all their viral posts to the contrary, saw no widespread fraud.
The report makes clear that the platforms did not cause these lies to be spread. Nor does it seek to make a case that these lies spread primarily through algorithmic amplification. Rather, it places platforms at the center of a dynamic information ecosystem. Sometimes the lies were “top down” — fabricated by Trump and his cronies and then turned into content by partisan media outlets and right-wing influencers. Other times, the lies were “bottom up”: shared by an average citizen as a tweet, a Facebook post, or a YouTube video, which were then spotted by Trumpworld and amplified.
These processes worked to reinforce each other, creating powerful new narratives that ultimately fueled the rise of previously obscure outlets like One America News Network and Newsmax. And in all of that, there is plenty for every platform studied here to answer for.
The report faults platforms for failing to anticipate and “pre-bunk” likely election misinformation; failing to examine the efficacy of their efforts to label misinformation or share those findings with external researchers; and often failing to hold high-profile users accountable for repeated violations of platform policies, among other issues.
Still, in both the report and a 90-minute virtual event that the partnership held Wednesday, I was struck by the unique — and, to my mind, under-discussed — role that YouTube played in the election.
So let’s discuss it.
The day after the election, I wrote here about how YouTube was being exploited by the right wing. Unclear policies, inconsistently applied, combined with opaque or misleading labels, had made YouTube a playground for hyper-partisan outlets. Uniquely among platforms, YouTube’s partner program enabled many of these corrosive videos to earn money for their channels — and for YouTube — through advertising.
The EIP report picks up on all these themes and more, fleshing them out with new data and explaining the special role YouTube played in cross-platform misinformation campaigns.
Here are three key observations from the report.
One, for misinformation narratives tracked by the project using Twitter’s API, YouTube was linked to more often than any other platform. Among all domains appearing in tweets containing links to misinformation, YouTube ranked third, behind Gateway Pundit and Breitbart. Researchers tracked 21 separate incidents, generating nearly 270,000 retweets, that pointed to YouTube. The next-highest-ranking platform, at 17th, was Periscope; Facebook does not appear on the list.
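To make the kind of tally that finding implies concrete, here is a minimal sketch of ranking the domains that tweet links point to. This is not the EIP’s actual methodology, and the sample links are invented for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

def rank_domains(tweet_links):
    """Tally the domains that a set of tweet links point to, most-shared first."""
    counts = Counter(
        urlparse(link).netloc.removeprefix("www.") for link in tweet_links
    )
    return counts.most_common()

# Hypothetical sample of links pulled from tweets flagged as misinformation.
links = [
    "https://www.youtube.com/watch?v=abc123",
    "https://www.youtube.com/watch?v=def456",
    "https://example-partisan-site.com/story",
]
print(rank_domains(links))  # YouTube appears first here, with two links
```

In practice researchers would pull these URLs from Twitter’s API at scale; the point of the sketch is just that a domain-level count is what lets YouTube be compared against news sites and other platforms on a single list.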
This finding speaks to the way YouTube serves as a powerful library for hoaxes and conspiracy content, which can continuously be resurfaced on Twitter, Facebook, and other platforms via what the report calls “repeat spreaders” like Trump and his sons.
“It was kind of a place for misinformation to hide and be remobilized later,” said Kate Starbird, an associate professor and co-founder of UW’s Center for an Informed Public, in a response to my question during Wednesday’s event. “From our view, it was a core piece of the repeat spreading phenomenon, and a huge piece of the cross-platform disinformation spread.”
YouTube disputes this conclusion, and says its rank on this chart is more of a reflection of the site’s popularity in general than a comment on the accuracy of the information found there. Other sites, including the Washington Post, ranked high on the list because they contained information debunking false claims rather than advancing them. “In fact, the most-viewed election-related content channels are from news channels like NBC and CBS,” YouTube spokesman Farshad Shadloo told me.
Two, YouTube’s library of misinformation was enabled by policies that tended to be more permissive than similar ones from Facebook and Twitter. An analysis of platform policies leading up to the election found that in August 2020, YouTube failed to adopt comprehensive policies related to misinformation about how to vote, incitements to voter fraud, or efforts to delegitimize election results. By the end of October, the only significant change YouTube made was to adopt a comprehensive policy about voting procedures, researchers said.
Meanwhile, Facebook, Twitter, and TikTok all implemented comprehensive policies designed to thwart efforts to delegitimize the election. (In fairness to YouTube, the report’s policy analysis still ranked it above Nextdoor and Snapchat, which were found not to have adopted comprehensive policies in any of these areas.)
“YouTube lagged in terms of their implementation,” said Carly Miller, a research analyst at Stanford. “Things were able to propagate on the platform because of that.”
YouTube disagrees with this conclusion as well, and sent me a long list of policy changes it had made over the past year, including some that were copied by its peers. “As we’ve publicly discussed, we don’t agree with EIP’s framing of our policies or our efforts,” Shadloo told me. “Our community guidelines are generally on par with other companies and we launched several products in 2018 and 2019 to raise authoritative content and reduce borderline videos on our site.”
Finally, the report found that every platform struggled to moderate live video in particular. Some videos containing lies about the election attracted millions of views before they received so much as a label.
“All platforms struggle with labeling,” said Nicole Buckley, a research analyst at UW. “But in particular YouTube had issues with adapting to embedding labels in new forms … of content sharing.”
Ultimately, the EIP reached very different conclusions about YouTube’s performance in the 2020 election than YouTube itself did.
“This is a cross-platform, cross-media set of issues where each part of the ecosystem is leveraged in a different way,” Shadloo said, echoing a conclusion drawn by the EIP researchers. “No two platforms face the exact same challenges, and … interventions that make sense for one may not for another.”
On that point, YouTube and the EIP agree. But for the most part, I have the same concerns about the platform that I had in November.
More on election misinformation:
From Issie Lapowsky in Protocol: “In the months before and after the 2020 election, far-right pages that are known to spread misinformation consistently garnered more engagement on Facebook than any other partisan news, according to a New York University study published Wednesday.”
And from Kari Paul in the Guardian: “While 70% of misinformation in English on Facebook ends up flagged with warning labels, just 30% of comparable misinformation in Spanish is flagged, according to a study from the human rights non-profit Avaaz.”
Today in news that could affect public perception of the big tech companies.
🔃 Trending sideways: Amazon had to redo its new iOS app logo after complaints that the initial redesign included an illustration of packing tape that looks like a Hitler mustache. Oops. (Chaim Gartenberg / The Verge)
⭐ Arizona advanced a bill that would require Apple and Google to allow alternative payment options in their app stores. If signed into law, the bill could deprive the tech giants of their 30 percent cut for digital goods. Nick Statt explains at The Verge:
The legislation, a sweeping amendment to Arizona’s existing HB2005, prevents app store operators from forcing a developer based in the state to use a preferred payment system, putting up a significant roadblock to Apple and Google’s ability to collect commissions on in-app purchases and app sales. It will now head to the state senate, where it must pass before it’s sent to Arizona Gov. Doug Ducey.
The amendment specifically prohibits stores exceeding 1 million downloads from requiring “a developer that is domiciled in this state to use a particular in-application payments system as the exclusive mode of accretive payments from a user.” It also shields users living in Arizona from having to pay for apps using exclusive payment systems, though it’s not immediately clear if that means developers outside Arizona can avoid paying commission to Apple and Google when they sell something to a state resident.
President Biden came out strongly for the Amazon union drive in Alabama, leading organizers to hope his endorsement could prove decisive. Biden delivered a strong pro-union message Sunday in a video posted to Twitter. (April Glaser, Olivia Solon and Cyrus Farivar / NBC)
Facebook will once again allow US advertisers to run political ads starting Thursday. They have been banned since the November US presidential election, save for a short window where ads related to the Georgia Senate run-offs were permitted. (Sarah Fischer / Axios)
Related: Duke researchers say the use of agencies to buy most political ads has made it almost impossible for them to track how those firms are spending money. (Cat Zakrzewski / Washington Post)
Facebook removed five networks from Thailand, Iran, Morocco and Russia in February after finding evidence of coordinated inauthentic behavior. It also found evidence that the military government in Myanmar has sought to evade a recent ban and reestablish itself on Facebook — so far unsuccessfully. (Facebook)
The Facebook Oversight Board will seek access to the company’s recommendation algorithms to better understand how they work. If successful, the move would represent a significant expansion of the board’s oversight, which currently includes only the ability to restore posts that have been wrongfully removed. (Alex Hern / Guardian)
The director of the US Cybersecurity and Infrastructure Security Agency said it could take the government 18 months to fully recover from the SolarWinds hack. As Microsoft’s Brad Smith told Congress last week: “Right now, the attacker is the only one who knows the entirety of what they did.” (Patrick Howell O’Neill / MIT Tech Review)
Parler dropped its federal lawsuit against Amazon — but filed another in state court. The new suit, filed in Washington, “alleges defamation and breach of contract by Amazon, specifically citing a provision that gives clients 30 days to remedy any material breach of the contract before service is terminated.” (Russell Brandom / The Verge)
Saudi Arabia launched a coordinated Twitter campaign to undermine the US intelligence community’s conclusion that Crown Prince Mohammed bin Salman “approved” the killing of Jamal Khashoggi in 2018. Twitter has been removing Saudi accounts by the thousands lately amid an uptick in spammy behavior. (Craig Timberg and Sarah Dadouch / Washington Post)
Virginia became the second state in the country to pass a data protection law, after California. While a good thing for consumers, the move threatens to grow the patchwork of privacy legislation that tech companies must now navigate. (Kate Andrews / Virginia Business)
Twitch published its first transparency report. Amid 40 percent growth in channels last year, “enforcement actions increased by 788,000 [in] early 2020 to 1.1 million [in] late 2020, which Twitch says reflects its increase in users.” (Cecilia D’Anastasio / Wired)
China’s ‘Sharp Eyes’ program aims to surveil 100% of public space. If you ever wondered what Amazon’s Ring network would look like if it were mandatory and installed in every town in America, it’s basically this. (Dave Gershgorn / OneZero)
MeWe has attracted tens of thousands of new users from Facebook in Hong Kong, amid an apparent crackdown on political posts in groups that has left users complaining about a lack of clarity in policy enforcement. Pro-democracy supporters are increasingly moving away from mainstream platforms out of necessity. (Eric Cheung / Rest of World)
⭐ Google said it will stop selling ads based on users’ web browsing behavior, a move that reflects growing regulatory scrutiny of targeted advertising. It could also draw further scrutiny to one company’s ability to reshape the ad tech market. Here are Sam Schechner and Keach Hagey in the Wall Street Journal:
The decision, coming from the world’s biggest digital-advertising company, could help push the industry away from the use of such individualized tracking, which has come under increasing criticism from privacy advocates and faces scrutiny from regulators.
Google’s heft means that its move is also likely to stoke a backlash from some competitors in the digital ad business, where many companies rely on tracking individuals to target their ads, measure their effectiveness and stop fraud. Google accounted for 52% of last year’s global digital ad spending of $292 billion, according to Jounce Media, a digital-ad consultancy.
Twitter beat Clubhouse to launching its audio product on Android. That could help Spaces gain traction in the period before Clubhouse launches on Android itself. (Kim Lyons)
Netflix added a TikTok-like feed to its mobile app to showcase short clips of comedy specials for users on the go. It’s the latest sign that TikTok has replaced Snapchat as Silicon Valley’s external product manager. (Todd Spangler / Variety)
Snap CEO Evan Spiegel says the company is poised to grow more than 50 percent a year even without any additional user growth. “Snap is used by approximately 50% of smartphones in the U.S. but accounts for only a single-digit percentage of the advertising market in the U.S.” (Salvador Rodriguez / CNBC)
SoundCloud will begin distributing subscription revenues to artists based on the percentage of time users spend listening to them. These “fan-powered royalties” resemble the way that YouTube distributes Premium revenue to creators. (Ashley Carman / The Verge)
The shift to remote work increasingly means that recruiters advertise that even executive-level hires will not have to live near headquarters. “Artisanal, whose clients include Databricks, Snowflake, and Splunk, handled about 100 executive searches in 2020 for more than 50 companies. None of them required candidates to be at headquarters.” (Alistair Barr / Bloomberg)
Those good tweets
Talk to me
Send me tips, comments, questions, and election misinformation: email@example.com.
YouTube and other platforms that include or specialize in video face an additional problem: they cannot do full-text screening of potential misinformation. But since YouTube automatically generates captions for many of its videos, especially those in English, those captions could be used for an additional layer of misinformation screening. It is possible that YouTube is already doing this, though I doubt it.
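The caption-screening idea above could be sketched in just a few lines. This is a hypothetical illustration, not anything YouTube is known to run; the phrase list and `flag_captions` helper are inventions for this example, and a real system would use far more sophisticated classifiers than substring matching:

```python
# Hypothetical caption screening: flag auto-generated caption text that
# matches known misinformation phrases, so the video can be queued for
# human review rather than removed automatically.
SUSPECT_PHRASES = [
    "ballots were shredded",  # invented stand-ins for tracked narratives
    "dead people voted",
]

def flag_captions(caption_text: str) -> list[str]:
    """Return the suspect phrases found in a video's caption text."""
    text = caption_text.lower()
    return [phrase for phrase in SUSPECT_PHRASES if phrase in text]

captions = "Eyewitnesses claim dead people voted in several counties"
print(flag_captions(captions))  # any hit would queue the video for review
```

Even this crude approach would let text-based moderation tooling reach into video, which is exactly the gap the comment identifies.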