YouTube comes for Q

Plus: Twitter and the New York Post, revisited

I.

On Wednesday, Facebook and Twitter sniffed out what appeared to be a disinformation operation and acted quickly to stop its spread. In the day since, voices from across the political spectrum have weighed in on the decision. But one consequential voice remained conspicuously silent: YouTube.

As Daisuke Wakabayashi noted at the New York Times, the company would not say whether it had taken any action on videos related to the New York Post’s article. A short video by the Post featuring the article’s key claims reached 100,000 views — a very modest amount, particularly for a story that got 1.2 million engagements on Facebook despite having had its reach “limited” there. YouTube told the Times “it was monitoring videos about the article closely” — and nothing else.

In short, the Post controversy found YouTube in the role it finds most comfortable in content moderation battles: that of the laggard. Whether the subject is Alex Jones, creator harassment, or hate speech, YouTube has often acted only after its fellow platforms made the first move. There are exceptions, of course — YouTube banned Holocaust denial more than a year before Facebook and Twitter, which both got around to it only this week. But when I think of YouTube’s approach to policy, I generally think of it as a platform that prefers to play catch-up.

It was with that in mind that I read about YouTube’s latest major change, announced Thursday morning: a crackdown on QAnon, which comes shortly after Facebook and Twitter made similar moves. In YouTube’s framing, the new policy includes QAnon but is not limited to it. Rather, the policy covers “content that targets an individual or group with conspiracy theories that have been used to justify real-world violence.” (The policy also stops short of a full ban; news coverage and videos that “discuss” QAnon without targeting individuals may be permitted.)

YouTube hastened to add that it has already removed many thousands of QAnon videos under its existing policies. It also updated its recommendation systems in 2019 to show fewer videos containing misinformation, a change the company says reduced their spread by 70 percent.

At the same time, the company’s blog post elides YouTube’s own role in helping QAnon go mainstream. As Kevin Roose notes at the Times, that role was significant:

Few platforms played a bigger role in moving QAnon from the fringes to the mainstream than YouTube. In the movement’s early days, QAnon followers produced YouTube documentaries that offered an introductory crash course in the movement’s core beliefs. The videos were posted on Facebook and other platforms, and were often used to draw recruits. Some were viewed millions of times.

QAnon followers also started YouTube talk shows to discuss new developments related to the theory. Some of these channels amassed large audiences and made their owners prominent voices within the movement.

Were it not for the surge in real-world violence — or the actions of Facebook and Twitter — it’s fair to wonder whether YouTube would have taken this action at all. There is no perfect time to ban a conspiracy theory that initially appears ridiculous — but as we discussed when Facebook made its move last week, one good time would have been May 2019. That’s when the FBI warned that QAnon represented a domestic terrorism threat — a warning that proved to be sadly correct.

There can be benefits to taking a slower, more deliberative approach to policymaking on platforms. Each policy implemented by a Facebook or a YouTube requires making difficult trade-offs, and those trade-offs need to be carefully considered — preferably with the help of outside experts, academic researchers, and civil rights groups. And, of course, there is still something to be said for permitting uncomfortable and offensive speech, even if those benefits have been difficult to discern in the maelstrom of 2020.

But there’s also such a thing as moving too slowly. Content moderation is not a race — there is no prize for being the first to ban some noxious person or movement — but inattention can kill. The next time the FBI identifies a credible domestic terror movement bubbling up on the platforms, here’s hoping it doesn’t take them more than a year to limit its reach.

II.

I had generally positive things to say about Facebook and Twitter intervening to stop the spread of the Post story, but when I woke up this morning I found myself in the minority. The word “fiasco” was used a lot. Some felt that Twitter had “goofed it.” “Unacceptable,” opined Jack Dorsey, who is (checks notes) the chief executive officer of Twitter.

Twitter came in for more grief than Facebook did because it took more aggressive action against the Post story. Where Facebook limited the story’s reach by some unknown amount, Twitter went so far as to block users from posting the link or sharing it via direct message at all. At a time when there is much bad-faith hand-wringing about “censorship,” this was the genuine (banned) article. If you wanted to post a link to the Post, you had to do it somewhere else.

Elated to finally see their worst fears realized, Senate Republicans immediately announced that there would be a hearing about the Post link. On October 23rd Jack Dorsey will appear before them, shame-faced, and get a fresh opportunity to throw his policy and communications teams under the bus.

Meanwhile, the Federal Communications Commission used the moment to announce that it would undertake an effort to “clarify” Section 230 of the Communications Decency Act, which lets platforms moderate content without being held legally liable for everything their users post. FCC Chairman Ajit Pai said there is “bipartisan support” for reforming the law, which is true only in the sense that there is “bipartisan support” for appointing a Supreme Court justice to replace Ruth Bader Ginsburg. The parties want very different things, and have shown no signs of reaching an agreement despite spending much of the year debating various bills on the subject.

The Republican side of this debate has been conducted in such bad faith that I hesitate to amplify it — the Democratic side hasn’t been much better — but fortunately TechDirt’s Mike Masnick has concisely explained the hypocrisy of this FCC announcing that it would clarify this particular rule:

For years, FCC Chair Ajit Pai has insisted that the thing that was most important to him was to have a “light touch” regulatory regime regarding the internet. He insisted that net neutrality (which put in place a few limited rules to make sure internet access was fair) was clearly a bridge too far, and had to be wiped out or it would destroy investment into internet infrastructure (he was wrong about that). But now that Section 230 is under attack, he's apparently done a complete reversal.

In short, you can be the party of light-touch, cut-the-red-tape regulation, or the party that intervenes legally to prevent companies from exercising their speech rights because doing so benefits you politically, but it’s extremely tacky to be both at the same time. One of the major political stories of the moment is the Republican Party’s refusal to acknowledge the legitimate power of others — see the president’s refusal to agree to a peaceful transfer of power should he lose the election — and in the outrage toward Twitter this week, we see another of its dark manifestations.

Which is not to say that Twitter’s actions are beyond reproach. There are plenty of good-faith criticisms you can make; The Verge’s Adi Robertson has some here. Twitter struggled to articulate its rationale when it acted; the rationale it settled on appears somewhat contradictory; and the policy could limit the sharing of legitimate journalism in the future.

But the best rejoinder to those criticisms is that Twitter’s blocking of the Post link really did work as intended. Yes, it drew extra attention to the story — but much of that attention was focused on the gaps in the story, the Post’s weak defense of its own reporting, and the likelihood that it was part of a disinformation campaign. (It’s extremely likely!) Just as it did when it started asking users to comment before retweeting last week, Twitter added friction.

The story still spread around the internet just fine; an analytics company found 4.5 million mentions of it. But the spread was slowed meaningfully; the truth had time to put its shoes on before Rudy Giuliani’s shaggy-dog story about a laptop of dubious origin made it all the way around the world. If you are a person who has been worried about the collapse of truth in the age of social networks, this, it seems to me, is precisely the sort of outcome you have been looking for.

Wherever we are headed, this was always going to be a stop on the journey. Social networks are drafted into disinformation campaigns; social networks take effective action against disinformation campaigns; would-be beneficiaries of said disinformation campaigns scream that their toys are being taken away.

This is the basic logic of bullying. “Bullying creates a moral drama in which the manner of the victim’s reaction to an act of aggression can be used as retrospective justification for the original act of aggression itself,” the late writer David Graeber wrote in 2015. He was talking about all forms of bullying, but he also reflected on how the dynamic pertained to social networks:

Anyone who frequents social media forums will recognize the pattern. Aggressor attacks. Target tries to rise above and do nothing. No one intervenes. Aggressor ramps up attack. Target tries to rise above and do nothing. No one intervenes. Aggressor further ramps up attack.

This can happen a dozen, fifty times, until finally, the target answers back. Then, and only then, a dozen voices immediately sound, crying “Fight! Fight! Look at those two idiots going at it!” or “Can’t you two just calm down and learn to see the other’s point of view?” The clever bully knows that this will happen—and that he will forfeit no points for being the aggressor. He also knows that if he tempers his aggression to just the right pitch, the victim’s response can itself be represented as the problem.

Well, here we are. I prefer to live in a world where platforms confidently exercise editorial judgment — judgment rooted in clearly stated principles, crafted in consultation with a diverse group of experts, and enforced consistently. Others prefer to live in a world with almost no platform-level editorial judgment whatsoever. Only one of those worlds leads to an information environment that will be healthy for the rest of us to live in. And it’s telling just how afraid of that world the bullies are.


The end of Platformer’s free preview is here

I hope you’ve enjoyed the first two weeks of Platformer. There’s something thrilling about waking up every day knowing that I’m working directly for you — and as we head into the election, that work is only going to become more urgent.

To the more than 1,000 of you who signed up during the free preview period, thank you — you’re helping to build a bright new future for independent, reader-supported journalism. If you’ve been waiting for the optimal moment to upgrade your subscription, now’s the time.

Free subscribers: thanks for coming along this far. You’ll continue to receive a weekly edition of Platformer at no charge. I’ll check in with you next week to make my best pitch for membership. But if you’ve already made up your mind, I invite you to click the nice purple button right here:


Governing

President Trump and his backers are building a powerful online apparatus to amplify false claims that the election is “rigged,” and are actively priming the Republican base not to accept election results. An unsettling story from Brandy Zadrozny at NBC News:

Starbird, who with her colleagues has tracked 300 million tweets related to voting and ballots, said rare or old local news articles about improperly thrown-away mail were being exaggerated and reframed in misleading ways by hyperpartisan right-wing news websites, amplified by Twitter influencers and eventually promoted by the White House as proof of some inevitable election fraud. […]

"We're experiencing an acceleration in disinformation," Starbird said. "And I don't think we're even just looking at Election Day but possibly for days and even weeks after, depending on how things go."

Apple is taking extraordinary action to get Telegram to remove individual posts from channels connected to pro-democracy protesters in Belarus. Worse, Apple is pressuring Telegram not to inform users that their posts are being removed. If you think Twitter’s censorship is bad, you really ought to read this one. John Gruber covered it at Daring Fireball:

I’ve said it before and will adamantly say it again: it is prima facie wrong that one of the rules of the App Store is that an app is not allowed to explain the rules of the App Store. I’m hard pressed to think of an exception to this conviction, not just on Apple’s App Store, but in any sphere of life — whether a harmless game or the administration of the law.

A Wall Street Journal stress test of Facebook’s content moderation systems found that the company initially made the wrong call in a significant majority of cases. “When the Journal reported more than 150 pieces of content that Facebook later confirmed violated its rules, the company’s review system allowed the material—some depicting or praising grisly violence—to stand more than three-quarters of the time.” (Jeff Horwitz / Wall Street Journal)

Facebook removed the page of a fringe political party in New Zealand ahead of its election. Advance New Zealand had repeatedly shared misinformation about COVID-19. (Neil Sands and Taylor Thompson-Fuller / AFP)

A trio of Ukrainians on Facebook with more than 200,000 friends and followers has been spreading debunked conspiracy theories about the Biden family for more than a year. Despite the size of their audience, though, their recent posts have gotten little engagement. (Christopher Miller / BuzzFeed)

Democrats tweet more than Republicans, and Trump is Twitter’s top subject. “The median Democrat follows 126 people and has 32 followers, and tweets once a month. Meanwhile, the median Republican follows 71 people, is followed by 21, and never tweets at all.” (David Pierce / Protocol)


Industry

Twitter went down for 90 minutes on Wednesday afternoon. The company said there is no evidence that it was attacked, meaning that instead of being scary the outage was merely embarrassing. (Nick Statt / The Verge)

Snap belatedly added popular music tracks to Snapchat. The feature is limited to iOS and does not include full major-label catalogs, putting it well behind TikTok and Instagram on that front. (Todd Spangler / Variety)

Snap and Pinterest benefited from this summer’s advertiser boycott of Facebook. Ad spending more than doubled year over year on Snap during the period covered by the boycott, and spending on Pinterest rose 40 percent, according to Mediaocean. (Tom Dotan / The Information)

Google added “hum to search.” It was announced today at Search On, a virtual event Google held to demonstrate that it uses search to do more than crush its rivals. (Google)

The latest multiplayer video game to captivate young people is Among Us. The game involves identifying an impostor aboard a spaceship, encouraging players to develop elaborate conspiracy theories before they are murdered. Fun! (Taylor Lorenz / New York Times)

Something is happening involving Google Chat, Google Hangouts, and Google Workspace. I read this blog post several times and, through no fault of the author, am still at a loss. (Jay Peters / The Verge)


Talk to me

Send me tips, comments, questions, and forbidden Q videos: casey@platformer.news.