How Facebook does (and doesn’t) shape our political views
Four long-awaited studies paint a muddy picture of social media’s impact on public opinion
Today let’s talk about some of the most rigorous research we’ve seen to date on the subject of social networks’ influence on politics — and the predictably intense debate around how to interpret it.
Even before 2021, when Frances Haugen rocked the company by releasing thousands of documents detailing its internal research and debates, Meta had faced frequent calls to cooperate with academics on social science. I’ve argued that doing so is ultimately in the company’s interest, as the absence of good research on social networks has bred strong convictions around the world that social networks are harmful to democracy. If that’s not true — as Meta insists it is not — the company’s best path forward is to enable independent research on that question.
The company long ago agreed, in principle, to do just that. But it has been a rocky path. The Cambridge Analytica data privacy scandal of 2018, which originated in an academic research partnership, made Meta understandably anxious about sharing data with social scientists. A later project with a nonprofit named Social Science One went nowhere: Meta took so long to produce data that the project’s biggest backers quit before it had produced anything of note. (Later it turned out that Meta had accidentally provided researchers with bad data, effectively ruining the research in progress.)
Despite those setbacks, Meta and researchers have continued to explore new ways of working together. On Thursday, the first research to come out of this work was published.
Three papers in Science and one in Nature sought to understand how the contents of the Facebook news feed affected users’ experiences and beliefs. The studies analyzed data on Facebook users in the United States from September to December 2020, covering the period during and immediately after the US presidential election.
In one experiment, the researchers prevented Facebook users from seeing any “reshared” posts; in another, they displayed Instagram and Facebook feeds to users in reverse chronological order, instead of in an order curated by Meta’s algorithm. Both studies were published in Science. In a third study, published in Nature, the team reduced by one-third the number of posts Facebook users saw from “like-minded” sources—that is, people who share their political leanings.
In each of the experiments, the tweaks did change the kind of content users saw: Removing reshared posts made people see far less political news and less news from untrustworthy sources, for instance, but more uncivil content. Replacing the algorithm with a chronological feed led to people seeing more untrustworthy content (because Meta’s algorithm downranks sources who repeatedly share misinformation), though it cut hateful and intolerant content almost in half. Users in the experiments also ended up spending much less time on the platforms than other users, suggesting the modified feeds had become less compelling.
By themselves, the findings fail to confirm the arguments of Meta’s worst critics, who hold that the company’s products have played a leading role in the polarization of the United States, putting its democracy at risk. But nor do they suggest that altering the feed in ways some lawmakers have called for — making it chronological rather than ranking posts according to other signals — would have a positive effect.
“Surveys during and at the end of the experiments showed these differences did not translate into measurable effects on users’ attitudes,” Science’s Kai Kupferschmidt writes in his summary of the research. “Participants didn’t differ from other users in how polarized their views were on issues like immigration, COVID-19 restrictions, or racial discrimination, for example, or in their knowledge about the elections, their trust in media and political institutions, or their belief in the legitimacy of the election. They also were no more or less likely to vote in the 2020 election.”
Against this somewhat muddled backdrop, it’s no surprise that a fight has broken out around which conclusions we should draw from the studies.
Meta, for its part, has suggested that the findings show that social networks have only a limited effect on politics.
“Although questions about social media’s impact on key political attitudes, beliefs, and behaviors are not fully settled, the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes,” Nick Clegg, the company’s president of global affairs, wrote in a blog post. “They also challenge the now commonplace assertion that the ability to reshare content on social media drives polarization.”
But behind the scenes, as Jeff Horwitz reports at the Wall Street Journal, Meta and the social scientists have been fighting over whether that’s true.
The leaders of the academics, New York University professor Joshua Tucker and University of Texas at Austin professor Talia Stroud, said that while the studies demonstrated that the simple algorithm tweaks didn’t make test subjects less polarized, the papers contained caveats and potential explanations for why such limited alterations conducted in the final months of the 2020 election wouldn’t have changed users’ overall outlook on politics.
“The conclusions of these papers don’t support all of those statements,” said Stroud. Clegg’s comment is “not the statement we would make.”
Science headlined its package on the studies “Wired to Split,” leading to this amazing detail from Horwitz: “Representatives of the publication said Meta and outside researchers had asked for a question mark to be added to the title to reflect uncertainty, but that the publication considers its presentation of the research to be fair.”
Meagan Phelan, who worked on the package for Science, wrote to Meta early this week saying that the journal’s findings did not exonerate the social network, Horwitz reported. “The findings of the research suggest Meta algorithms are an important part of what is keeping people divided,” she wrote.
What to make of all this?
While researchers struggle to draw definitive conclusions, a few things seem evident.
One, as limited as these studies may seem in their scope, they represent some of the most significant efforts to date by a platform to share data like this with outside researchers. And despite valid concerns from many of the researchers involved, in the end Meta did grant them most of the independence they were seeking. That’s according to an accompanying report from Michael W. Wagner, a professor of mass communications at the University of Wisconsin at Madison, who served as an independent observer of the studies. Wagner found flaws in the process — more on those in a minute — but for the most part he found that Meta lived up to its promises.
Two, the findings are consistent with the idea that Facebook represents only one facet of the broader media ecosystem, and most people’s beliefs are informed by a variety of sources. Facebook might have removed “stop the steal”-related content in 2020, for example, but election lies still ran rampant on Fox News, Newsmax, and other sources popular with conservatives. The rot in our democracy runs much deeper than what you find on Facebook; as I’ve said here before, you can’t solve fascism at the level of tech policy.
At the same time, it seems clear that the design of Facebook does influence what people see, and may shift their beliefs over time. These studies cover a relatively short period — during which, I would note, the company had enacted “break the glass” measures designed to show people higher-quality news — and even still there was cause for concern. (In the Journal’s story, Phelan observed that “compared to liberals, politically conservative users were far more siloed in their news sources, driven in part by algorithmic processes, and especially apparent on Facebook’s Pages and Groups.”)
Perhaps most importantly, these studies don’t seek to measure how Facebook and other social networks have reshaped our politics more generally. It’s inarguable that politicians campaign and govern differently now than they did before they could use Facebook and other networks to broadcast their views to the masses. Social media changes how news gets written, how headlines are crafted, how news gets distributed, and how we discuss it. It’s possible that the most profound effects of social networks on democracy lie somewhere in this mix of factors — and the studies released today only really gesture at them.
The good news is that more research is on the way. The four studies released today will be followed by 12 more covering the same time period. Perhaps, in their totality, we will be able to draw stronger conclusions than we can right now.
I want to end, though, on two criticisms of the research as it has unfolded so far. Both come from Wagner, who spent more than 500 hours observing the project over more than 350 meetings with researchers. One problem with this sort of collaboration between academia and industry, he wrote, is that scientists must first know what to ask Meta for — and often they don’t.
“One shortcoming of industry–academy collaboration research models more generally, which are reflected in these studies, is that they do not deeply engage with how complicated the data architecture and programming code are at corporations such as Meta,” he wrote. “Simply put, researchers don’t know what they don’t know, and the incentives are not clear for industry partners to reveal everything they know about their platforms.”
The other key shortcoming, he wrote, is that ultimately this research was done on Meta’s terms, rather than the scientists’. There are some good reasons for this — Facebook users have a right to privacy, and regulators will punish the company mightily if it is violated — but the trade-offs are real.
“In the end, independence by permission is not independent at all,” Wagner concludes. “Rather, it is a sign of things to come in the academy: incredible data and research opportunities offered to a select few researchers at the expense of true independence. Scholarship is not wholly independent when the data are held by for-profit corporations, nor is it independent when those same corporations can limit the nature of what is studied.”
On the podcast this week: X marks the spot where Twitter died. Plus, what’s really behind WorldCoin’s iris-scanning orb project? Then, Kevin and I journey to Mountain View to explore Google’s robotics lab.
The U.S. Senate Commerce Committee approved the Kids Online Safety Act and COPPA 2.0, bringing the pair of child safety bills one step closer to law despite fierce criticism from free speech and digital rights advocates. (Makena Kelly / The Verge)
Senators Lindsey Graham and Elizabeth Warren penned a joint opinion piece for The Times expressing bipartisan support for more aggressive regulation of Big Tech in the U.S. and imploring Congress to act. They want to name a new regulator for the industry. (Lindsey Graham and Elizabeth Warren / The New York Times)
Anthropic, Google, Microsoft and OpenAI formed a research group they’re calling the Frontier Model Forum to collaborate on AI safety issues, but some critics fear it won’t enact meaningful change. (George Hammond / Financial Times)
AI pioneers Yoshua Bengio and Stuart Russell, alongside Anthropic CEO Dario Amodei, testified at a Congressional hearing on Tuesday warning that AI advancements pose a national security threat. (Gerrit De Vynck / The Washington Post)
GitHub, Hugging Face, Creative Commons and others sent a list of suggestions to the European Parliament to consider when finalizing the AI Act to better support open-source AI development. (Emilia David / The Verge)
The war in Ukraine has spurred major advancements in AI-controlled drone technology, creating concerns that the related knowledge and pilot training could spread to terrorists and drug cartels. (John Hudson and Kostiantyn Khudov / The Washington Post)
The EU formally opened an antitrust investigation into Microsoft over whether it abused its market power by bundling the Teams app with its Office software suite. Slack first filed a complaint about the practice with the European Commission in 2020. (Kim Mackrael / WSJ)
The FTC is in the final stages of preparing its antitrust lawsuit against Amazon, which may be filed next month. The agency is said to be focusing on Amazon Prime and the company’s treatment of merchants. (Josh Sisco / Politico)
Meta’s Threads isn’t labeling state-sponsored media, as the company does on Facebook and Instagram, and some Chinese and Russian news accounts have already amassed large followings by posting propaganda. (Newley Purnell / WSJ)
A profile of TikTok CEO Shou Zi Chew details the executive’s recent power grabs, which ultimately led to COO V Pappas’ departure, and his plans to juice profit and growth with e-commerce and live streaming. (Erin Woo and Juro Osawa / The Information)
An alarming alcohol-related TikTok trend, in which creators are paid in tips to take shots while live streaming, has led to at least one death but little action from the platform’s moderation team. (Jessica Lucas / HuffPost)
Elon Musk reinstated the X/Twitter account of QAnon supporter and far-right troll Dom Lucre after he was suspended for posting child sexual abuse material. Musk claimed that only the company’s moderation team saw the images, but researchers say they remained up for four days. (David Gilbert / Motherboard)
UNESCO called for smartphones to be banned in schools to address classroom distraction, cyberbullying and social media-induced mental health issues. (Patrick Butler and Hibaq Farah / The Guardian)
Google will resume Street View photography in Germany more than a decade after the country’s strict privacy laws led it to suspend its operation. (Aggi Cantrill and Stephanie Bodoni / Bloomberg)
New research from Carnegie Mellon University demonstrated how anyone could easily circumvent safety controls on ChatGPT and other AI tools to generate harmful and dangerous information. (Cade Metz / The New York Times)
Stability AI released its latest text-to-image model, called Stable Diffusion XL 1.0, and has made it available as an open source app on GitHub and on its consumer platform. (Kyle Wiggers / TechCrunch)
The rise of generative AI in film and TV has striking actors and writers concerned they might be facing an existential crisis that could permanently reshape creative work in Hollywood. (Lucas Shaw / Bloomberg)
New listings for AI positions at entertainment firms like Disney and Netflix, the latter of which posted a role paying as much as $900,000 per year, are exacerbating tensions with creatives. (Ken Klippenstein / The Intercept)
News app Artifact is adding an AI-powered text-to-speech feature that will let celebrity voices read you articles. Snoop Dogg and Gwyneth Paltrow are confirmed voice options. (Jay Peters / The Verge)
X, the company formerly known as Twitter, slashed its ad prices by 50% and offered other incentives to try to win back advertisers. At the same time, the company threatened brands with the loss of their verification status if they don’t buy $1,000 worth of ads per month. (Suzanne Vranica and Patience Haggin / WSJ)
X CEO Linda Yaccarino visited Hollywood this past week in an effort to court talent agencies and pitch the platform as a destination for influencers and celebrities. (Hannah Murphy and Christopher Grimes / Financial Times)
X took over the @x handle without informing or financially compensating the original owner. Photographer Gene X Hwang said he was offered merchandise instead. (Sarah Perez / TechCrunch)
Indonesia blocked X.com due to the country’s ban on online pornography, although the Ministry of Communication and Informatics is in touch with the company to discuss lifting the restriction. (Aisyah Llewellyn / Al Jazeera)
Elsewhere in Muskland:
Tesla allegedly rigged the dashboard readouts in its vehicles to inflate the driving range and then created a secret team of employees tasked with burying owner complaints about it by canceling service appointments. I don’t know much about cars. Is that good? (Steve Stecklow and Norihiko Shirouzu / Reuters)
Meta’s recovering revenue growth and aggressive cost-cutting efforts have bought Mark Zuckerberg more time to continue investing in his metaverse ambitions. (Aisha Counts and Alex Barinka / Bloomberg)
Microsoft’s cloud and Office businesses made up for slumps in its Windows and devices segments in its most recent quarter, with Xbox also showing modest growth thanks to software sales. (Tom Warren / The Verge)
YouTube enjoyed a 4.4% year-over-year increase in sales, to $7.67 billion, illustrating a more stable digital ad market. Google also said more than 2 billion logged-in users are watching YouTube Shorts. (Todd Spangler / Variety)
Alphabet CFO Ruth Porat said she would step down from her role to become the company’s president and investment officer in charge of its “Other Bets” divisions like Waymo and Verily. (Jennifer Elias, Deirdre Bosa and Kif Leswing / CNBC)
Mastodon said it would start selling merch, including t-shirts and mugs, to help it fund development and continue operating as a non-profit. It also has a Patreon. (Sarah Perez / TechCrunch)
Mastodon client Mammoth said it would begin testing an algorithmic For You tab in an effort to make the decentralized social network more accessible. Good! It needs one. (Sarah Perez / TechCrunch)
Bumble launched a dedicated app for finding new friends in the U.S. and several other markets, but it intends to also continue supporting its BFF mode in the main Bumble app. (Ivan Mehta / TechCrunch)
Those good tweets
For more good tweets every day, follow Casey’s Instagram stories.