The platforms spy a hack-and-leak
In a New York Post report, Facebook and Twitter smell a rat
Today let’s talk about that New York Post story about a laptop that might have belonged to Hunter Biden, Facebook and Twitter’s efforts to reduce the story’s distribution, and why they matter.
In the run-up to the 2020 election, platforms have been preparing for all manner of threats. One that they have warned about with some frequency is the “hack and leak” operation. A hack-and-leak occurs when a bad actor steals sensitive information, manipulates it, and releases it in an effort to influence public opinion. The most famous hack-and-leak is the dissemination of Hillary Clinton’s stolen emails in 2016, which may have affected the outcome of the election.
A hack-and-leak works because it exploits journalists’ natural fondness for writing about secret documents, ensuring that they get wide coverage — sometimes before reporters have a chance to closely examine their provenance. (It turns out that basically all humans have a fondness for reading secret documents, and one reason hack-and-leaks seem particularly threatening in the age of social networks is that platform sharing mechanisms allow these stories to spread around the world more or less instantly.)
Because of the role they play in amplifying big stories, platforms have taken the prospect of a hack-and-leak on the eve of the election quite seriously. And so when the New York Post dropped its story about a laptop of dubious origin containing what purported to be incriminating documents related to Joe Biden and his son, the Spidey senses of platform integrity teams all began to tingle in harmony.
I won’t link to the Post story, but the Daily Beast interviewed the computer repair store owner who apparently brought the laptop to public attention: an avid Trump supporter who invoked the Seth Rich conspiracy theory to explain why he feared for his life after sharing the documents. The man said “a medical condition” prevented him from seeing who actually dropped off the laptop at his repair shop, said he believed it belonged to Hunter Biden because of a sticker on the laptop, and offered at least three different stories about how he got connected to the FBI.
Given that the Post’s article has only been live for half a day, there is much that remains unknown at press time: where the laptop came from, whether it has any authentic connection to the Bidens, and whether there’s anything truly incriminating there if so. But in the serialized puzzle-box universe of Fox News, discredited stories about Hunter Biden have been an ongoing storyline. Given the timing of the document dump, and the incoherence of the shop owner’s account, suspicions about the story seem warranted.
In the run-up to the election, platforms have accepted two key responsibilities: to reduce the spread of harmful posts, and to reduce that spread quickly. (A subject we talked about here on Tuesday, with respect to Twitter taking action on Trump tweets.) A hack-and-leak operation represents one of the most difficult tests of this commitment — the operation is designed to spread far and wide long before all the real facts can be known.
Facebook has reduced the reach of a New York Post story that makes disputed claims about former Vice President Joe Biden’s son, Hunter, pending a fact-check review. “While I will intentionally not link to the New York Post, I want be clear that this story is eligible to be fact checked by Facebook’s third-party fact checking partners. In the meantime, we are reducing its distribution on our platform,” tweeted Facebook policy communications manager Andy Stone.
Twitter banned linking to the Post’s report, but it cited a different policy: the site’s rules against posting hacked material. “In line with our hacked materials Policy, as well as our approach to blocking URLs, we are taking action to block any links to or images of the material in question on Twitter,” a spokesperson told The Verge. Clicking existing links will direct users to a landing page that warns them it may violate Twitter guidelines. Twitter used the same strategy in June to ban content from Blueleaks, a collection of leaked documents from police departments.
Just as notable as the fact that the platforms acted here is the speed with which they did so. Stone announced that distribution had been reduced on Facebook within about three hours of the Post story going up, and Twitter acted shortly thereafter. There are cases in which I think platforms should act even faster than this — three hours is too long to decide whether a single tweet from a high-profile account violates policy, I think. But to identify a potential hack-and-leak operation and restrict it within a few hours, before it hits the Twitter Trending page, deserves some credit.
Of course, some people have other opinions. For some, the platforms’ actions represent a chilling example of their power over the boundaries of speech on the internet. For others, the actions risk being counter-productive — driving more attention to a story by making it appear to be forbidden knowledge. For still others, the action reeked of inconsistency: why immediately throttle this story, but not, say, the New York Times’ articles on the president’s tax returns, which are also of uncertain provenance?
And for conservative members of Congress, who seize on any negative outcome on social media as proof positive of a vast conspiracy against them, Wednesday was a bonanza. Sen. Josh Hawley, who along with Ted Cruz leads the conservative grievance brigade in the Senate, announced that he would investigate the platforms’ action as a possibly illegal contribution to the Biden campaign. (Which would make Fox News’ prime-time lineup what, exactly?)
For its part, the Biden campaign denied the substance of the report, and a former deputy assistant secretary of defense who serves as a Biden adviser called the Post story “a Russian disinformation campaign.” In a truly shameful move, the Post refused to defend its own reporting, instead referring questions to an editorial it wrote about how the mean platforms had “censored” it.
There is a difference, of course, between reducing the spread of an article and removing it from the platform entirely. Facebook allowed the links to remain, but throttled their algorithmic promotion while its fact-checkers investigate. This move grants the Post a right to speak without giving it the reach that, should it all indeed turn out to be a disinformation campaign, it would not have warranted. This is the entire point of having a “virality circuit breaker,” which Facebook adopted earlier this year.
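For readers curious about the mechanics, the circuit-breaker idea can be sketched in a few lines of code. This is purely an illustration of the concept — demote fast-spreading, unchecked links rather than remove them — and every name and threshold here is hypothetical, not Facebook’s actual system:

```python
# Hypothetical sketch of a "virality circuit breaker." When a link's share
# velocity crosses a threshold before fact-checking completes, algorithmic
# distribution is throttled: the link stays up, but its reach is reduced.
# Class name, thresholds, and verdict labels are all illustrative.

class ViralityCircuitBreaker:
    def __init__(self, shares_per_hour_threshold=10_000, throttle_factor=0.2):
        self.threshold = shares_per_hour_threshold
        self.throttle_factor = throttle_factor  # multiplier on ranking score
        self.under_review = set()   # URLs awaiting third-party fact-check
        self.verdicts = {}          # URL -> "false" / "ok" once checked

    def record_velocity(self, url, shares_per_hour):
        # Trip the breaker: fast-spreading, unchecked links enter review.
        if shares_per_hour >= self.threshold and url not in self.verdicts:
            self.under_review.add(url)

    def ranking_multiplier(self, url):
        # Links under review are demoted, not removed; debunked links more so.
        if self.verdicts.get(url) == "false":
            return 0.0
        if url in self.under_review:
            return self.throttle_factor
        return 1.0

    def resolve(self, url, verdict):
        # Fact-check verdict arrives; restore or zero out distribution.
        self.under_review.discard(url)
        self.verdicts[url] = verdict
```

The design point worth noticing: the breaker never touches the post itself, only the multiplier that feeds ranking — which is exactly the speech-versus-reach distinction at issue here.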
Twitter’s action was more aggressive, and thus more interesting. It is not entirely clear that blocking all links to the Post is consistent with Twitter’s own stated policies, which as Evelyn Douek pointed out seem to permit reporting on a hack, “or sharing press coverage of hacking.”
On the other hand, five days ago Twitter said it’s working to monitor election integrity issues and will take action when needed. Ideally platforms would take decisive action rooted in principles — so here’s hoping that in coming weeks, Twitter’s written policies catch up to its actions.
Why are hack-and-leak operations so effective in the United States?
In a significant new piece in the New York Times Magazine, Emily Bazelon offers an answer to that and other questions about the First Amendment in the age of platforms. If you’ve followed along with this column for the past couple weeks, or even just yesterday’s piece on Holocaust denial, you’ll want to spend time with this one — it goes deep on how the expansive American vision of free speech can enable actors that threaten our democracy.
A key takeaway is that platforms have a clear role to play in protecting the health of our information ecosystem — but also that other large media companies can play as large, or larger, a role. (A recurring theme for us here lately.) One of the ways Bazelon explores this is by looking at the relative experience of two democracies dealing with Russian hack-and-leak operations. You already know what happened with Wikileaks in America. You may not remember what happened in France:
The French press responded otherwise to a Russian hack in May 2017. Two days before a national election, the Russians posted online thousands of emails from En Marche!, the party of Emmanuel Macron, who was running for president. France, like several other democracies, has a blackout law that bars news coverage of a campaign for the 24 hours before an election and on Election Day. But the emails were available several hours before the blackout began. They were fair game. Yet the French media did not cover them. Le Monde, a major French newspaper, explained that the hack had “the obvious purpose of undermining the integrity of the ballot.”
Marine Le Pen, Macron’s far-right opponent, accused the news media of a partisan cover-up. But she had no sympathetic outlet to turn to, because there is no equivalent of Fox News or Breitbart in France. “The division in the French media isn’t between left and right,” said Dominique Cardon, director of the Media Lab at the university Sciences Po. “It’s between top and bottom, between professional outlets and some websites linked to very small organizations, or individuals on Facebook or Twitter or YouTube who share a lot of disinformation.” The faint impact of the Macron hack “is a good illustration of how it’s impossible to succeed at manipulation of the news just on social media,” said Arnaud Mercier, a professor of information and political communication at the University Paris 2 Panthéon-Assas. “The hackers needed the sustainment of the traditional media.”
That’s why the next turn to watch for in the Post saga is how much — and on what terms — it gets covered in mainstream newspapers and nightly news programs. If they take the bait, and an odor of scandal attaches itself to the Biden campaign, the operation (assuming it is an operation) will have worked. But if they treat the story as the platforms did, with severe skepticism, the story may remain confined to the right-wing conspiracy sphere.
Conservatives will continue to cry censorship, of course. But as Zeynep Tufekci wrote in 2016, flooding every available channel with conspiratorial nonsense may represent an even more pernicious attack on our democracy. It comes cloaked in the mantle of free speech, but it serves only to distract and confuse.
“In this era of information overload, censorship works by drowning us in too much undifferentiated information, crippling our ability to focus,” she wrote. “These dumps, combined with the news media’s obsession with campaign trivia and gossip, have resulted in whistle-drowning, rather than whistle-blowing: In a sea of so many whistles blowing so loud, we cannot hear a single one.”
I hope you all have been enjoying the free preview of Platformer. If you’d like to keep receiving four issues a week, make sure to upgrade your subscription by Monday. Otherwise, you’ll continue to receive one free issue each week — although likely on a different day each time, introducing a disturbing new element of randomness into your already chaotic life.
To stay on a regular schedule, consider upgrading today. Huge thanks to everyone who has already done so!
Today in news that could change public perception of the big tech companies.
⬆️ Trending up: Twitter said it would begin removing posts that deny or distort the Holocaust. The move comes just days after Facebook made a similar change. (Kurt Wagner / Bloomberg)
⬆️ Trending up: Facebook is pushing out voting information to citizens who live outside the country, including military members and their families. Notifications will appear at the top of the News Feed. (Karen Jowers / Military Times)
⭐ In the wake of July’s massive Twitter hack, New York’s top financial watchdog says social networks should be overseen by a dedicated regulator. The attack demonstrated that Twitter in particular cannot be trusted to regulate itself, according to New York’s superintendent of financial services. James Rundle has the story at the Wall Street Journal:
DFS recommended that the new regulator, which could be a part of an existing agency or a stand-alone body, should be allowed to designate the largest social media platforms as systemically important. The label is usually reserved for the very largest banks and institutions underpinning financial markets, which are subject to stronger oversight than their peers.
The ability for deliberate misinformation to spread quickly over social networks, for instance, demonstrates the necessity for greater oversight and dedicated regulation for cybersecurity at these companies, DFS argued.
Conservatives are posting viral videos on Facebook warning of a coming “coup” by Democrats — prompting their followers to threaten violence. One such video by popular right-wing commentator Dan Bongino has nearly 3 million views. (Davey Alba / New York Times)
Twitter suspended accounts that were masquerading as Black Trump supporters. Some of the accounts had tens of thousands of followers, and had been collectively retweeted 265,000 times. (Reuters)
YouTube has banned misinformation about COVID-19 vaccines. The site will remove videos that falsely claim the vaccine will kill people, cause infertility, or implant microchips into people. (Elizabeth Culliford and Paresh Dave / Reuters)
Internet freedom has continued to decline during the pandemic. The annual Freedom of the Net report found that 28 countries used the pandemic as a pretext for limiting critical speech about the government, and 45 countries charged activists, journalists, and other citizens with crimes for online speech related to COVID-19. (Lily Hay Newman / Wired)
QAnon gained popularity in the United Kingdom in part thanks to a Facebook group called Freedom for the Children UK. The group, which attracted more than 13,000 members, used the now-familiar tactic of appealing to people’s desire to “save the children” as a method of smuggling bizarre ideas into the mainstream. (Shayan Sardarizadeh / BBC)
Clarence Thomas says that Section 230 has offered platforms too much legal protection. In an opinion, the Supreme Court justice argued that the court should consider limiting protections when platforms have knowingly distributed illegal content. (Judd Legum / Popular Information)
Disinformation campaigns appear to be surging across social networks in Africa. In Guinea, where an election is scheduled for Sunday, networks of paid workers are posting positive messages about the ruling party. (Pauline Bax and Loni Prinsloo / Bloomberg)
Google has made it increasingly difficult for travel companies to compete with it, rivals say. A new vacation-rental listings box placed above search results has greatly diminished their traffic. (Sam Schechner / Wall Street Journal)
Amazon workers say the company is risking their safety by reinstating dangerous workplace productivity quotas despite the pandemic still raging. The company previously told a judge that it would relax the quotas for safety reasons. (Josh Eidelson and Spencer Soper / Bloomberg)
Amazon has found a way around a new 2 percent tax on digital sales for UK retailers. Amazon is of course famously good at tax avoidance; in this case the tax is being passed on to sellers on the platform. (Mark Sweney / Guardian)
Zoom introduced a paid online events platform and new integrations into the service, which it’s calling “Zapps.” The company is capitalizing on its outsized success during the pandemic to go after an online events market now being pursued by Facebook and other giants. (Frederic Lardinois / TechCrunch)
Zoom is also preparing to roll out end-to-end encryption. (Paul Sawers / VentureBeat)
Facebook is teaming up with Carnegie Mellon to seek new “electrocatalysts” using artificial intelligence. The goal is to find new combinations of elements that can convert excess solar and wind energy into fuels that are easier to store. (Sam Shead / CNBC)
Those good tweets
Talk to me
Send me tips, comments, questions, and old laptops you found at your computer store: firstname.lastname@example.org.