How YouTube got played on Election Day

Facebook and Twitter planned for the actual threat. YouTube didn't


Yesterday, I went to bed wondering what reality I would wake up in. The answer is: an extremely contentious one.

Joe Biden is now leading in the popular vote count and the projected electoral vote count. President Trump, as he has promised to for months now, spent the day attempting to undermine confidence in the election. In states where Trump was leading in the vote count, he prematurely declared victory and demanded that vote counting stop. In states where he was trailing, his campaign said he would pursue a recount.

It was all extremely dispiriting — and left me more anxious, in its own way, than Election Day itself did. It also created an enormous amount of work for tech platforms, as the president, his family, and their media allies sought to flood Facebook, Twitter, and YouTube with misinformation and premature claims of victory.

In this, though, the platforms had an advantage. The president has said for months now that he would not accept a loss in the election, setting the stage for this exact scenario. Like a Bond villain who explains his entire plan in advance, giving the hero ample time to devise a counterattack, Trump gave away his entire game before it began.

Facebook and Twitter took that time and used it to craft policies designed specifically to thwart early victory claims and explain that counting hundreds of millions of ballots takes time. YouTube, though … not so much.

The first smart thing that Facebook and Twitter did was to think beyond what posts they would remove. Building a healthy information ecosystem — and protecting the integrity of our elections — requires more than rapidly removing the worst posts. It also requires explaining the voting process.

Open the Twitter app and click the search button, and the top headline reads: “Election results: Presidential race is undecided as ballot counting continues, according to AP, ABC, and NBC.” An accompanying live video stream from ABC News offers up-to-the-minute, accurate discussion of the state of the race.

Facebook and Instagram ran notifications at the top of their feeds alerting users that “Votes are being counted,” as Sarah Perez noted at TechCrunch:

Both apps are using the same language for their respective notifications:

“The winner of the 2020 U.S. Presidential Election has not been projected yet. See more updates and learn what to expect as the election process continues.”

Facebook initially said it would not apply labels to posts in which users claimed premature victory in individual states; when observers pointed out that claiming premature victory in individual states was actually the greatest immediate threat to election integrity, the company — to its great credit — quickly reversed course.

YouTube, on the other hand, went with a bare-bones “see election results” box on the top of the mobile feed. A shelf of videos below the box highlighted live election streams from mainstream news outlets. And an accompanying “show me” button kicked users out to the Google search app, where a pre-populated “election search results #election2020” query returned a live feed of results from the Associated Press.

I would describe the efforts from Twitter and Facebook here as intentional: they identified a threat in advance and sought to pre-empt it in the product. YouTube’s approach, on the other hand, I would describe as functional: it linked to high-quality information sources, but did so in the way you would expect if there were no threat to election integrity whatsoever.

The second thing platforms had to do Wednesday was deal with the offending content itself.

Naturally, the offending content started with the president, who continually made false claims of victory, stoking outrage on the right. Twitter acted with a speed and precision that I have never seen in a decade of covering the platform, applying labels to a wide swath of Trump tweets, often within 30 minutes of his tweeting. Some of those labeled tweets cannot be retweeted, limiting their distribution.

Twitter also discovered a network of 150 spam accounts pushing anti-Biden conspiracy theories and removed it. A few of my friends retweeted a fake Associated Press account falsely saying that the election had been called in Biden’s favor; just minutes after reporters noted this, the account was suspended.

Facebook doesn’t hide offending posts behind labels; instead it places the label underneath them. But this, too, had a positive effect. Scroll through Trump’s page and you see that every time he posted — eight times in the past nine hours — a big, bold label informed users that votes are still being counted.

As for YouTube? Visit Trump’s page and the only additional information you’ll see is a box labeled “U.S. elections” that says “Results may not be final. See the latest on Google.” This was, to be clear, false on Wednesday, and will continue to be throughout the week and arguably even the month: no result will truly be “final” until elections are certified, which in some states will not be until December. Facebook and Twitter’s labels helped to reduce uncertainty about the voting process; YouTube’s false label, which suggested that results might be “final” when they won’t be for weeks, arguably added to it.

And offending content? YouTube surely removed its share, the company told me, including some synthetic media and incitements to violence. But we won’t know the volume at which YouTube acted until it releases its next transparency report. In the meantime, journalists identified some serious problems.

Insider identified multiple Election Day live streams in which several channels — including one with more than 1 million followers — broadcast fake election results to thousands of people, while earning advertising revenue on the streams the whole time. YouTube removed the videos after Insider reported them.

Conservative pundit Steven Crowder got 3.5 million views on a 5-hour live stream in which he promoted various debunked voter fraud claims, Kevin Roose noted at the New York Times.

And the far-right One America News Network posted a video titled “Trump won. MSM hopes you don’t believe your eyes” that has racked up more than 300,000 views. Because YouTube has no policy against prematurely declaring victory, all the video gets is the “results may not be final” box — at a time when, again, the results will not be final for weeks.

Jennifer Elias wrote about the video at CNBC:

Google-owned YouTube’s policies say it will remove content “encouraging others to interfere with democratic processes, such as obstructing or interrupting voting procedures.” Last month, the company tightened its policies to include removing targeted conspiracy theory-driven videos that may result in real-life violence.

YouTube said the video doesn’t violate its “Community Guidelines” but refused to respond to questions about why. The company confirmed it will discontinue ads on the video, however.

Average YouTube ad rates run about $7.60 per 1,000 views. While I assume OANN rates are far lower, at most YouTube might have deprived the company of around $2,300. In the meantime, the video is still eligible to appear in search results and to be promoted in search recommendations. (Even if, the company told me, it likely would not be.)
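The back-of-the-envelope math above can be sketched out explicitly. A hypothetical upper-bound estimate, assuming the average $7.60-per-1,000-views ad rate cited and the roughly 300,000 views reported at the time (actual OANN rates are unknown and likely far lower):

```python
# Upper-bound estimate of the ad revenue YouTube forgoes by demonetizing
# the OANN video. Both inputs are the figures cited in the text, not
# measured values: $7.60 is an industry-average CPM, and 300,000 is the
# approximate view count at the time of writing.

AVERAGE_CPM_USD = 7.60   # average ad rate per 1,000 views
VIEWS = 300_000          # approximate views on the video

def max_forgone_revenue(views: int, cpm: float) -> float:
    """Upper-bound revenue forgone: (views / 1,000) * CPM."""
    return views / 1000 * cpm

print(f"${max_forgone_revenue(VIEWS, AVERAGE_CPM_USD):,.0f}")
```

Which works out to roughly $2,300 at most — consistent with the figure above, and almost certainly an overestimate given OANN's actual rates.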

YouTube designed its election-related content policies so they could be deployed extremely broadly. The company wanted a one-size-fits-all label it could slap on anything remotely election-related. From a certain perspective, this approach grants YouTube both the ability to say that it acted on a vast scale (“we labeled everything!”) while also remaining absolutely politically neutral (“we said nothing!”).

That approach might be effective in a year where there is not a known, specific threat to the integrity of the election. But this year there is: a candidate who announced in advance his intention to deny a losing election result, and a wide array of creators and media outlets prepared to promote that lie relentlessly.

I’d be less exercised about all this had YouTube executives not spent the past week aggressively promoting their election preparedness efforts, arguing they are doing their level best to promote authoritative sources. (We were even treated to a zero-calorie “diary” of a day in the life of the company’s head of news and civic partnerships during an election year, which conveyed how busy and civic-minded everyone at YouTube is while setting aside any substantive discussion of policy or enforcement.)

You don’t get to take a preemptive victory lap on your preparedness while sitting out the actual fight. Twitter and Facebook showed this week that they can work quickly and effectively to tamp down misinformation when it matters most, and to shift course in a hurry when needed. With several weeks of election misinformation surely now heading our way, here’s hoping YouTube will learn from their example.


The Ratio

Today in news that could change public perception of the big tech companies

⬇️ Trending down: Facebook’s live streaming tool was used to broadcast election-related conspiracy theories, employees said. But it’s unclear they got much reach — the example here had 2,500 concurrent viewers. (Ryan Mac and Craig Silverman / BuzzFeed)


Governing

We have our first QAnon caucus in Congress. Marjorie Taylor Greene, who has espoused Q-related conspiracy theories, won a seat in Georgia on Tuesday, in what could present a thorny moderation challenge for platforms. Sarah Mimms reports at BuzzFeed:

Greene — who has espoused QAnon theories, posted a video of herself threatening “antifa terrorists” with an AR-15, and shared a meme of herself with a gun next to Rep. Alexandria Ocasio-Cortez and other members of "the Squad" — won election in Georgia’s 14th District. Decision Desk HQ called the race shortly after polls closed in the state at 7 p.m. ET. She didn’t face any opposition on Tuesday night after her Democratic opponent dropped out of the race.

Voters approved California’s Proposition 24, in an expansion of privacy rights. “The initiative prohibits legislators from weakening the California Consumer Privacy Act, creates a state agency to enforce privacy protections, and gives people more control over how tech companies use their personal information, such as race or health data.” (Dustin Gardiner and Shwanika Narayan / San Francisco Chronicle)

Voters also approved California’s Proposition 22, allowing gig economy companies including Uber and Lyft to continue treating their workers as independent contractors. “The vote resolves the fiercest regulatory battle Uber and Lyft have faced and opens a path for the companies to remake labor laws throughout the country. The fight pit labor groups and state lawmakers against ride-hailing and delivery start-ups that spent $200 million in support of the measure.” (Kate Conger / New York Times)

A North Dakota Republican who died of COVID-19 appears to have been elected to the state house of representatives. David Andahl, who died October 5, received 35.53 percent of the total vote. (Harmeet Kaur / CNN)

US Cyber Command and the National Security Agency took unspecified actions against Iran in recent weeks to prevent election interference. The move came after Iranian hackers working for the Islamic Revolutionary Guard Corps posed as a far-right group and sent threatening emails to American voters. (Ellen Nakashima / Washington Post)

The Turkish government fined the largest social media platforms $1.2 million each after they failed to comply with a new law requiring them to appoint “local representatives.” Facebook, Instagram, YouTube, Twitter, and TikTok were fined; appointing such representatives would likely allow Turkey’s authoritarian government to place more pressure on them to limit speech. (Firat Kozok / Bloomberg)

A Milwaukee county supervisor has apologized for using Cardi B’s ‘WAP’ to promote a Weatherization Assistance Program. Sir: you have nothing to apologize for. (WTMJ)


Industry

Tech stocks soared as the possibility of a divided government appeared to limit the possibilities for significant new regulations passing. Alex Wilhelm has the story at TechCrunch:

Futures concerning the tech-heavy Nasdaq Composite index are indicating a 3.4% gain this morning, far above a 1.7% gain that the broader S&P 500 index is currently anticipating.

The market capitalizations of some of the world’s most valuable companies have added tens of billions of dollars in value this morning, with Apple rising 3.9% in pre-market trading, and Microsoft gaining an even-richer 4.4%.

Tinder is surging during the pandemic, Match Group reported during its earnings this week. Tinder averaged 6.6 million subscribers in the September quarter, up from 6.2 million in the previous quarter. (Emily Bary / MarketWatch)

Spotify is rolling out the ability to stream music from your Apple Watch without an iPhone nearby. Some nice news for everyone going on post-election runs. (Jay Peters / The Verge)


Those good tweets


Talk to me

Send me tips, comments, questions, and better YouTube videos: casey@platformer.news.