On Friday, amid growing frustration about the spread of vaccine-related misinformation and social platforms’ role in amplifying it, President Biden told a reporter that “Facebook is killing people.” The bold accusation kicked off a firestorm over the weekend as partisans on every side of the issue weighed in on the extent to which Biden’s statement was true, false, incomplete or misleading.
“Facebook isn’t killing people, these 12 people are out there giving misinformation,” the president said, citing an administration report last week on online coronavirus vaccine misinformation. “That’s what I meant.”
“My hope is that Facebook, instead of taking it personally … that they would do something about the misinformation, the outrageous misinformation about the vaccine,” Biden continued.
The angry accusation from the president, followed by his sheepish withdrawal, marks an embarrassing moment for an administration that seemed to be settling on an honest and useful message for social networks. Earlier on Friday, US Surgeon General Vivek Murthy had called health misinformation “a serious threat to public health,” and said accurately that reducing its spread is “a moral and civic imperative that will require a whole-of-society effort.”
But White House press secretary Jen Psaki singled out Facebook for its role in the prevalence of anti-vaccination sentiment, and a White House source complained to CNN that the company was either not “taking this very seriously, or they are hiding something.”
Facebook responded forcefully, telling reporters in part: “More than 3.3 million Americans have also used our vaccine finder tool to find out where and how to get a vaccine. The facts show that Facebook is helping save lives. Period."
One reason this conversation feels both necessary and exhausting is that it gathers together so many concerns about Facebook from the past five years and refracts them through a single global crisis. There’s the company’s vast size; its history of unwittingly nurturing popular conspiracy movements; its role in accelerating the collapse of local media; and its spotty record on sharing data that would help us better understand its effects on the world. Underlying all these concerns is the fact that, despite years of angry complaints from Congress and a large handful of antitrust investigations into the company, in many ways Facebook remains accountable only to itself.
And, as often happens when we zero in on one or more of these issues, complaints from the White House and its supporters over the weekend left out almost every other relevant factor in the dissolution of our shared sense of reality: Fox News, YouTube, Twitter, the decline of local journalism, and anti-vaccine statements by leading Republican elected officials, to name five.
To be fair, the White House did have specific complaints it wanted Facebook to address: removing anti-vaxx posts and accounts more quickly after they are reported, and focusing on a dozen or so misinformation super-spreaders that it says are responsible for a disproportionate amount of harm. (Facebook says it is already doing this; at least 12 accounts associated with these people have been banned from at least some company services.)
But as Charlie Warzel points out, for the most part the discourse over the weekend was tribal in nature. Everyone retreated to their prior beliefs about the role of social networks, and tweeted accordingly. “The Biden v. Facebook discourse is a flattened, unproductive argument that has been shaped by the very platforms they’re trying to critique/defend,” he writes. “We are all stuck arguing this false binary because this is the way we argue now.”
You can be extremely concerned about how social networks are reshaping society and still be massively disappointed that the country’s top elected official acknowledges that when it comes to the subject, he is basically just freestyling based on … what advisers are telling him? An educated guess? A hunch?
One thing that might help us get out of this false binary is data — empirical evidence of how much good and bad information has been published on Facebook, and by whom, and how many people saw it, and the degree to which Facebook amplified it. Whatever their differences, both Facebook executives and Biden administration officials think of themselves as empiricists, and these facts are not unknowable.
Except they are, because for the most part Facebook doesn’t share them.
Now, it’s true that understanding the scope of what we call “misinformation” is complicated. Some of the news stories that increased vaccine hesitancy this year were true — the Johnson & Johnson vaccine was linked to a small number of blood clots and its rollout was paused, for example, creating a great deal of concern, much of it likely overblown. The point is that getting people excited about vaccines is not simply a matter of removing untrue Facebook posts.
At the start of the pandemic, a group of data scientists at Facebook held a meeting with executives to ask for resources to help measure the prevalence of misinformation about Covid-19 on the social network.
The data scientists said figuring out how many Facebook users saw false or misleading information would be complex, perhaps taking a year or more, according to two people who participated in the meeting. But they added that by putting some new hires on the project and reassigning some existing employees to it, the company could better understand how incorrect facts about the virus spread on the platform.
The executives never approved the resources, and the team was never told why, according to the people, who requested anonymity because they were not authorized to speak to reporters.
Facebook has been doing more in recent years to make data available to outside academic researchers. But most academics I’ve spoken with say that these moves have been inadequate, and in the meantime any research from these partnerships has been slow to be published. And as Frenkel’s report illustrates, it has rebuffed internal researchers who want the company to study the issue more closely.
The company has one good real-time, self-serve tool to help academics and journalists understand popular public content on Facebook: CrowdTangle, a tool it originally acquired to help publishers understand editorial trends on the platform. But as Kevin Roose reported last week, CrowdTangle appears to be under threat: its leader has been sidelined, and the team was recently reorganized into Facebook’s integrity division.
The company told me today that it is committed to CrowdTangle and plans to continue investing in it, but this sure feels like the beginning of the end of CrowdTangle as we know it. It remains to be seen what self-serve tools Facebook will offer to help the world understand what is transpiring there.
I’m not naive enough to think that we could resolve the Biden vs. Facebook debate with data. But I do think the debate would be much better informed, and possibly more productive.
This kind of corporate release of data is nowadays most often called “transparency,” and it’s notable how differently some people within Facebook see the issue. To executives concerned about the way CrowdTangle data is reported, for example, transparency is a communications strategy: “executives argued that Facebook should selectively disclose its own data in the form of carefully curated reports,” Roose reports, “rather than handing outsiders the tools to discover it themselves.”
To CrowdTangle employees I’ve spoken with over the years, though, transparency is more of a social responsibility. Facebook hosts an enormous amount of public content related to civic issues; CrowdTangle employees see its tool as a way for people to understand some of our most important conversations as they are happening. For now, at least, they’re in a position to ensure that we can.
But as in too many other cases, the tool’s availability is ultimately dependent on Facebook’s goodwill. If elected officials are truly concerned that misinformation is killing people, they should consider going beyond angry denunciations. Create legal reporting requirements for social platforms that detail what violations of their own standards they are finding, and encourage them to report at least some of their findings in real time.
Some bills that would require platforms to report various data related to content moderation are now making their way through Congress. They’re imperfect, though, as Carly Miller notes in this look at the limits of transparency reporting.
But for almost five years now, the public reckoning over social media has been conducted primarily via heated statements just like the president’s. And while platforms have made plenty of improvements since then, the fundamental power dynamics haven’t changed.
Biden and other officials can run their mouths about Facebook and other platforms all they like. But if we really want to figure out who’s killing whom, somehow this country is going to have to pass some laws.
⭐ An investigation by 17 news organizations found that NSO Group’s spyware was used in attempted and successful hacks of “37 smartphones belonging to journalists, human rights activists, business executives and two women close to murdered Saudi journalist Jamal Khashoggi.” Here are Dana Priest, Craig Timberg and Souad Mekhennet at the Washington Post:
The targeting of the 37 smartphones would appear to conflict with the stated purpose of NSO’s licensing of the Pegasus spyware, which the company says is intended only for use in surveilling terrorists and major criminals. The evidence extracted from these smartphones, revealed here for the first time, calls into question pledges by the Israeli company to police its clients for human rights abuses.
The media consortium analyzed the list through interviews and forensic analysis of the phones, and by comparing details with previously reported information about NSO. Amnesty’s Security Lab examined 67 smartphones where attacks were suspected. Of those, 23 were successfully infected and 14 showed signs of attempted penetration.
Related: NSO Group’s spyware enabled “zero-click” attacks on Apple iPhones, allowing spying on targets’ devices simply by sending them silent messages. (Craig Timberg, Reed Albergotti and Elodie Guéguen / Washington Post)
The United States and NATO called out China for malicious cyber attacks, including a March incident in which the country exploited a flaw in Microsoft Exchange. But it’s unclear whether simply calling out these attacks will deter further ones. (Ina Fried / Axios)
Two California residents were charged as part of an alleged plot to bomb Facebook and Twitter headquarters, among other targets, after the 2020 election. The alleged conspirators also discussed bombing the governor’s office, police said. (Dave Gershgorn / The Verge)
ByteDance has blocked the registration of new users and creators for its news aggregator app Jinri Toutiao since September after being ordered to do so by regulators. It’s not clear why; the app can still be downloaded and used. (Reuters)
There are few regulations on what app developers can do with your phone contacts if you agree to share them, and some privacy advocates are calling for new rules. Unacknowledged in this article is the fact that contact sharing helps new companies grow, which can bolster competition. (Heather Kelly / Washington Post)
A look at the firms that help advertisers de-anonymize data sets to allow for even more invasive ad targeting. I really wish data brokers experienced even 10 percent of the scrutiny that social networks get; their practices are often deeply unethical. (Joseph Cox / Vice)
⭐ A look at the growing interest among tech firms in building a “metaverse” as the next frontier in digital platforms. Here are John Herrman and Kellen Browning in the New York Times:
Video games like Roblox and Fortnite and Animal Crossing: New Horizons, in which players can build their own worlds, have metaverse tendencies, as does most social media. If you own a non-fungible token or even just some crypto, you’re part of the metaversal experience. Virtual and augmented reality are, at a minimum, metaverse adjacent. If you’ve attended a work meeting or a party using a digital avatar, you’re treading into the neighborhood of metaversality.
Founders, investors, futurists and executives have all tried to stake their claim in the metaverse, expounding on its potential for social connection, experimentation, entertainment and, crucially, profit.
Content moderators for WhatsApp are being forced to sign a document acknowledging that the work could give them post-traumatic stress disorder. Spanish-speaking workers at Accenture’s Austin office say they have been denied bonus pay for being bilingual because Spanish is not considered a “premium language.” (Billy Perrigo / Time)
Apple removed Fakespot from the App Store after Amazon complained the fake review-spotting service created security vulnerabilities. The circumstances are in dispute, but it seems clear Amazon isn’t doing enough to weed out fakes itself. (Sean Hollister / The Verge)
Beams, Quest, and Pludo are among the new wave of apps trying to build networks out of user-generated audio. Facebook has a forthcoming app named Soundbites that aims to do something similar. (Ashley Carman / The Verge)
How Roblox Studio is enabling the creation of next-generation virtual experiences, from simple games to virtual hangout spaces. “Entire towns have sprouted up organically, each with fluctuating property dynamics.” (Lewis Gordon / The Verge)
More than $2.5 billion worth of non-fungible tokens were sold in the first half of 2021, according to a new report. Given their momentum, expect to see them in the big social apps. (Elizabeth Howcroft / Reuters)
Hallo is a new social network from two former WhatsApp executives, including pre-acquisition business operations lead Neeraj Arora. Hallo is ad-free, offers encrypted chat, and says it is the first “real relationships network.” (Hallo)
Those good tweets
Talk to me
Send me tips, comments, questions, and CrowdTangle data: firstname.lastname@example.org.