An error message is seen as Sen. Rick Scott (R-FL) speaks remotely during a Senate hearing Wednesday on Section 230 of the Communications Decency Act.
Well, we had another hearing with the platform CEOs.
The dream with this sort of thing is that Congress shows up with a full command of the issues, and asks the CEOs good-faith questions about matters of policy and law. And then I’d come along at the end of the day to walk you through the more provocative questions and productive answers, and gesture at what likely policy outcomes we could expect from this exercise in representative democracy.
But “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?,” a hearing of the Senate Committee on Commerce, Science, and Transportation, was not that kind of exercise. The word “sham” got kicked around a lot, especially by the participants. “Stunt,” too. Some of the Democrats declined to ask any questions at all.
It was not the first of these. In April 2018, House Republicans organized a hearing to investigate why the two conservative vloggers known as Diamond and Silk had experienced a decline in traffic sent to them by Facebook. The most likely explanation was that changes to Facebook’s algorithms often affect traffic patterns for publishers of all kinds, although we would later learn that the changes made in 2017 largely benefited conservative publishers at the expense of more liberal ones.
In fact, most of the research I have read has suggested that conservatives reap outsized benefits from the existence of social media, which provides ample room for their views to make regular end runs around the mass media. On Wednesday morning, Media Matters published results of a nine-month study showing that right- and left-leaning pages generate engagement at similar rates — but that right-leaning pages generated 43 percent of total interactions among pages posting about American politics, despite making up only 26 percent of posts.
But the platforms are big, and make mistakes, and those mistakes turn into anecdotes. Anecdotes can be merged into a working theory about platform governance, such as that the platforms are biased against conservatives.
And so less than a week before the election, with their candidate trailing in polls and an effort to shake up the race with a story that dozens of former intelligence officials say is likely a Russian disinformation campaign failing to gain traction, Senate Republicans held a hearing to complain about the unfairness of it all.
The theatrics, which often devolved into shouting, meant that the topic of the hearing — the future of a legal shield for online platforms — was barely debated. The event had been billed as a discussion about Section 230 of the Communications Decency Act, a law that protects social media companies from liability for what their users post and is regarded as sacrosanct by the platforms. […]
But the hearing’s barbed exchanges pointed to how the debate over online speech has become increasingly divided, with the companies caught in the middle. Of the 81 questions asked by Republicans, 69 were about censorship and the political ideologies of the tech employees responsible for moderating content, according to a tally by The New York Times. Democrats asked 48 questions, mostly about regulating the spread of misinformation related to the election and the coronavirus pandemic.
More than one observer noted that the main point of the hearing seemed to be to generate clips of Republicans looking pugnacious in the face of hated Silicon Valley elites, which they could then distribute on those elites’ own platforms. (“Basically a TikTok house for politicians,” in the words of Protocol’s David Pierce.) This seemed especially true of Sen. Ted Cruz, R-TX, who had promoted the fight on Twitter with a UFC-style infographic promising, in all caps, a FREE SPEECH SHOWDOWN. And, sure enough, his timeline today includes at least 19 clips of his sparring with Twitter CEO Jack Dorsey, including one that Cruz pinned to the top of his page for long-term viewing.
In the face of so much bad-faith arguing, I could not help but feel roused when Sen. Brian Schatz (D-HI) called the hearing “a scar on this committee.” “What we are seeing today is an attempt to bully the CEOs of private companies into carrying out a hit job on a presidential candidate by making sure they push out foreign and domestic misinformation meant to influence the election,” Schatz said, and I’m tempted to just leave the whole thing there.
Except that I can’t, because the question over Section 230 and how the internet ought to be regulated is one of the most important debates facing the tech industry. (If you’re unfamiliar with the law and the many controversies around it, I wrote an in-depth explainer earlier this year.)
Among Republicans, Democrats, and tech CEOs, there is agreement that the law is showing its age, and in need of updating. (Even if each group would amend it in very different ways.) And if you sweep away all the bad-faith arguments and even worse policy proposals, you’re left with genuine questions about power and responsibility. What speech should tech platforms be allowed to host, and to amplify? When they err, what is a just response? When a citizen is terrorized by online harassment, what recourse should they have?
From these broad questions, you might derive a basic set of principles. But that’s not enough to craft policy or law. To get there, you have to start asking really nettlesome questions.
Facebook, Google, and Twitter have signaled varying degrees of support for amending Section 230. Facebook has gone the furthest, suggesting that Congress set performance targets for the speedy removal of illegal content and require platforms to comply with them. Google and Twitter, by contrast, have encouraged restraint, noting that the ripple effects of such a change could be broad. (As Adi Robertson notes in this thread, changes to Section 230 could require newspapers to close their comments sections, or consumer complaints sites to shut down completely.)
In fact, the last time Section 230 was amended — with Facebook’s full support — the ripples were broad and destructive.
The 2018 FOSTA-SESTA law, nominally designed to curb sex trafficking, resulted in many online personals sites shutting down completely over liability fears. Its aftermath confirmed what academics had long warned: that the most predictable effect of limiting Section 230 would be to prompt platforms to over-moderate themselves, limiting speech on the internet.
FOSTA-SESTA did not come up once at today’s hearing — even though, in a sane world, that’s where the hearing would have begun.
Next time around, it won’t be the personals sites that suffer from Section 230 reform — they’re already gone. Nor is it likely to be Facebook, or Google, or Twitter, all of whom have the resources to adapt to whatever changes come their way. (Twitter has the fewest resources of the three, but it uses the same centralized moderation model that its peers do.)
Instead, the victims are likely to look more like Reddit, which relies on volunteers to help moderate the site in a way that an amended Section 230 might no longer allow. “What would be super unfortunate is if we end up throwing out 230 in an effort to punish the largest internet players for their perceived or real abuse of their dominance,” Reddit’s general counsel, Benjamin Lee, told Protocol. “Unraveling 230 would basically further ensure that dominance, while undermining the ability of smaller companies like Reddit to challenge that dominance with alternative models of innovation.”
I still believe that Section 230 can be modernized in a way that makes the internet better. If Senate Republicans had their way, though, the internet would only become smaller.
Elsewhere: How the hearing was designed to help President Trump on Election Day. Twitter’s three-point plan for amending Section 230. Is Facebook throwing the open internet under the bus? And for the love of God, teach yourself how to pronounce “Pichai.”
Oh boy, I made some mistakes in yesterday’s edition! One, Google originally announced its post-election ad ban in September, not Tuesday. (I was confused by a Washington Post story yesterday on the subject.) Two, while Facebook will no longer accept new political ads before the election, it also won’t allow any political ads for at least a week after the election. This should mitigate many of the concerns about Trump’s “I’m still president” ad that I covered. Finally, I should have noted that Wikipedia is edited by volunteer editors, and moves to lock certain pages around Election Day came from the community rather than the Wikimedia Foundation.
Making these mistakes on the same day made me wonder what system I can devise to stay on top of both when and how platforms have changed their policies around the election, and how I might share that with you all. Let me know if you have ideas — comments are open to everyone on today’s edition!
Today in news that could change public perception of the big tech companies
⬆️ Trending up: Since officially banning militias and QAnon content, Facebook said it has removed more than 14,000 groups related to militarized movements and 5,600 groups related to QAnon. Instagram removed 18,700 QAnon accounts as well.
⬆️ Trending up: TikTok will add live election results from the Associated Press to its app. The in-app election guide should help users find credible resources in the aftermath of November 3. (Sarah Perez / TechCrunch)
⬇️ Trending down: In an internal message, Spotify executives said they would not restrict Alex Jones or any “specific individuals” from appearing on the podcasts they fund. I’ll just go ahead and predict now that this policy will eventually change as the outcry grows over various bad actors appearing on more Spotify podcasts. (Jane Lytvynenko / BuzzFeed)
⭐ Republican secretaries of state attempted to stop Facebook from conducting its successful voter registration drive, which helped 4.4 million people register. A truly appalling story of anti-democratic efforts by top state officials, from Ryan Mac and Craig Silverman at BuzzFeed:
In September, Facebook received a strongly worded letter signed by the secretaries of state of Alabama, Idaho, Kentucky, Louisiana, Mississippi, and West Virginia, asking the company to discontinue its Voting Information Center. It argued election officials alone are “legally and morally responsible to our citizens” and said Facebook has “no such accountability.”
“While such goals may be laudable on their face, the reality is that the administration of elections is best left to the states,” read the letter, which was addressed to Zuckerberg. “The Voting Information Center is redundant and duplicative of what we, as chief election officials, have been doing for decades.”
Facebook’s ban on new ads before the election has blocked some approved ads by Joe Biden from running. The company attributed the issue to “data lags.” (Issie Lapowsky / Protocol)
Facebook said it removed two coordinated influence operations aimed at the United States, and a third aimed at Myanmar. The operations did not succeed in building large networks before they were caught, Facebook said.
A new report by Senate Democrats calls on tech platforms to negotiate carriage fees with local news outlets for hosting their content. US publishers have long sought an exemption to anti-collusion rules to be able to negotiate with platforms as a unit. (Keach Hagey / Wall Street Journal)
Banned from Facebook and other mainstream platforms, militia groups are increasingly turning to smaller rivals such as Zello and MeWe. They’re stoking fear about post-election unrest and, in some cases, groups say they are organizing in-person training exercises. (Sarah Emerson / OneZero)
A group of volunteer moderators is scouring WhatsApp for misinformation about Joe Biden as misinformation targeted at the Indian community in the United States spreads on the app. They have crafted more than 50 response texts and infographics to push back on hoaxes. (Paresh Dave / Reuters)
In the campaign’s closing days, there’s a surge of misinformation in text messages and email. Texts and emails are both likely to be seen as more credible than social media posts and subject to less scrutiny, making them an attractive channel for bad actors. (Isaac Stanley-Becker and Tony Romm / Washington Post)
Amateur pollsters and election watchers plan to challenge the primacy of traditional sources on Election Day. “‘There can be kind of a rat race to do it first because you'll get retweets,’ said Brent Peabody, 24, an enthusiastic participant in Twitter's election community.” Gulp! (David Ingram / NBC)
A national poll of likely voters found that almost half support the Department of Justice’s antitrust lawsuit against Google. In the survey, 52 percent of Republicans supported the suit, compared to 49 percent of Democrats. (Ashley Gold / Axios)
Italy’s antitrust watchdog will investigate Google’s dominance of display advertising in the country. The watchdog said Google may be using its access to vast stores of data to prevent competitors from challenging it. (Reuters)
President Trump’s campaign website was briefly hacked by cryptocurrency scammers. Demonstrating once again that Twitter isn’t the only site vulnerable to crypto hacks. (Devin Coldewey / TechCrunch)
A profile of Vijaya Gadde, Twitter’s head of legal, policy, and trust. Gadde led the push inside Twitter to end the sale of political ads, and has pushed the company to take a harder line on speech issues. (Nancy Scola / Politico)
Hate groups have access to a wide variety of funding mechanisms, including tools from companies with policies against hate speech. PayPal, Facebook, and Stripe were all found to be supporting hate groups in a recent analysis by a nonprofit research group. (Olivia Solon / NBC News)
How political campaigns get your phone number and other personal data. It’s a mix of state voter files, commercial data brokers, other campaigns, and information you volunteer. (Geoffrey Fowler / Washington Post)
Stringent speech controls on mass media in Nigeria have made social networks a vital avenue for free expression during recent protests of police violence. The #EndSARS hashtag has drawn attention to alleged extortion, kidnappings, and extrajudicial killings. (Tolu Olasoji / Vice)
Tech startups are divided on how much political discussion to allow in the workplace. Coinbase is on the “not very much” side of the spectrum; Expensify is on the “email 10 million customers directly and tell them to vote for Joe Biden” side of the spectrum. Erin Griffith has the story in the New York Times:
“I have never seen another instance like this in my career,” said Bradley Tusk, a venture capitalist and political consultant. “There’s no real separation anymore, in the current political climate, between politics and everything else. It has permeated absolutely everything.”
Members of India’s lower castes say they have faced discrimination at Silicon Valley workplaces. American companies often lack understanding of caste bias and have failed to explicitly ban caste-based discrimination, workers say. (Nitasha Tiku / Washington Post)
Apple is reportedly stepping up its efforts to build a rival search engine. A report in the Financial Times said the effort has intensified as Apple faces more questions from regulators about its lucrative deal to make Google the default search engine on iOS. (Sam Shead / CNBC)
Amazon’s Audible added 100,000 podcasts to its app to more directly challenge Spotify. A logical move, and yet it still doesn’t seem like an entirely good fit to me. Your thoughts? (Ashley Carman / The Verge)
Airbnb is facing more questions over the use of the platform to rent “party houses” around the world. Former employees say the issue was largely ignored until a deadly shooting at an Airbnb party house last Halloween. (Erin Griffith / New York Times)
Foursquare introduced Marsbot for AirPods, an audio-only “proactive walking assistant” that will whisper local recommendations to you. Feels like someone is going to crack this use case sooner or later, and could stand to make a lot of money when they do. (Foursquare)
TikTok and Shopify are teaming up to bring new shopping features to the app. Merchants can now launch and manage TikTok ad campaigns from the Shopify dashboard. (Sarah Perez / TechCrunch)
Things to do
Stuff to occupy you online during quarantine
Check out Not For You, an “automated confusion system” that engages with TikTok randomly in an effort to confuse its recommendation algorithms. Another art project from Ben Grosser, who you may remember from Demetricator and other critiques of social media.
Those good tweets
Sorry for including my own tweet, but.
Talk to me
Send me tips, comments, questions, and scurrilous arguments about censorship: firstname.lastname@example.org.