The infinite scroll goes on trial

Testifying before a jury in LA, Mark Zuckerberg makes the case that platform design is about free expression. But the walls are closing in on Section 230


Mark Zuckerberg’s time on the witness stand in Los Angeles on Wednesday began with a reprimand. Members of the Meta CEO’s team had entered the courtroom wearing Ray-Ban Meta AI glasses, which can take photos and record video.

A reporter who was present told CBS News that the judge “upbraided” the Meta employees and told them to delete any video they may have recorded. Meta declined to comment. But there was something fitting about the way that a referendum on platform power began with one company testing a long-established courtroom boundary. It’s arguably the same instinct that has made social platforms so profitable over the past decade: find the line, then push against it relentlessly.

The judge’s admonition to Meta’s team, which included a threat to hold its employees in contempt, marked the most dramatic moment in the day’s testimony. Over the next eight hours, as Zuckerberg answered questions from the plaintiff’s attorney about underage Instagram users and his decision to permit beauty filters in the app, there was occasional sparring but few revelations. By now Zuckerberg is well practiced at the art of answering hostile questions, and for a full day of testimony he remained obstinately on script. 

Unlike past appearances before Congress, though, Zuckerberg’s performance on Wednesday arguably had much higher stakes. The trial now underway in Los Angeles County Superior Court is the first of more than 1,600 social media addiction cases to go to trial. At stake is whether companies like Meta can be held accountable for harms that their users experience due to the design of the platforms. It’s a novel and potent challenge to Section 230 of the Communications Decency Act, which for decades has shielded tech companies from liability for what users post and experience on their platforms. And if the jury agrees with the plaintiffs, it could force the most significant changes to social app design to ever come out of a courtroom.

The plaintiff in the LA case is a 20-year-old California woman identified only as KGM, who says she became hooked on YouTube at age 6 and Instagram by age 9. Over time, she developed symptoms of depression and had suicidal thoughts that she attributes to her compulsive use of the platforms. KGM’s lawyers have likened the platforms to a "digital casino," offering visitors irregular dopamine hits via infinite scroll, autoplay videos, beauty filters, and algorithmic recommendations, among other slot machine-like features.

Notably, TikTok and Snap settled with KGM before the trial began. Meta and Google did not. YouTube CEO Neal Mohan was scheduled to testify today; he was removed from the witness list on Thursday morning after KGM attorney Mark Lanier said he was running out of the time allotted by the court to make his case. Cristos Goodrow, YouTube’s vice president of engineering, will testify on Monday.

Before the trial began, I noted here that design-based critiques of social platforms are on the rise. With content moderation under attack by President Trump and his allies, and Section 230 an effective shield against content-based litigation against social platforms, lawyers and advocates have begun to focus on the way app design can enable harm. Rarely does any individual piece of content lead to catastrophe. But push notifications interrupt sleep, beauty filters lead to body dysmorphia, and infinite scroll leads to problematic use.

Social platforms adopt these features because they increase the amount of time that people spend using them, and time spent correlates directly with revenue earned. And so, much of Zuckerberg’s time on Wednesday was spent insisting that Meta no longer emphasizes time spent on its platforms as a core objective.

“I’m focused on building a community that’s sustainable,” Zuckerberg said, according to Hannah Murphy in the Financial Times. Rather than time spent, he said, Instagram now focuses on “utility” and “value.” (If these read to you like different names for the same thing, you’re not alone.)

Zuckerberg was also asked to defend his decision to overrule concerns from a panel of 18 outside experts and his own staff and lift a ban on beauty filters on Instagram.

“Zuckerberg told the court there was a ‘high bar’ for demonstrating harm,” Murphy writes, “calling the restrictions ‘paternalistic’ and ‘overbearing,’ adding he ‘wanted to err on the side of people being able to express themselves’ in making the decision.”

The company’s own research had shown that the filters could promote body dysmorphia in teens, according to documents cited in the case. Other documents found that parental controls did little to stem compulsive social media use in children, and that “kids who experienced stressful life events were more likely to lack the ability to moderate their social media use appropriately,” Sarah Perez reported in TechCrunch.

Read in conjunction with the unredacted state attorneys general lawsuit against the company, which I wrote about here in 2023, there’s a recurring pattern: staff members research potential platform harms and urge caution to their superiors, who ultimately overrule them.

And now several different legal and regulatory paths are converging on the same target. California's social media addiction act seeks to limit recommendation algorithms from the legislative side. The 41-state AG lawsuit attacks them through consumer protection and privacy law. The jury trial that Zuckerberg testified at this week challenges them as defective products from the liability side.

Even Meta's scam advertising problem has prompted judges to question whether platforms' own terms of service create enforceable duties that Section 230 can't shield them from.  

All of these efforts are aimed at the same target: platform design choices that harm users on a huge scale, with everyone involved insisting that it’s someone else’s problem. 

Before he left the courtroom Wednesday, Zuckerberg said he wished Meta "could have gotten there sooner" on age verification. By the time the assault on Section 230 is finished, I suspect that will not be his only regret.

Sponsored

You'll forget 80% of this within 24 hours.

It's called the forgetting curve, and it's why your bookmarks, saved articles, and watch-later playlists never turn into actual knowledge. We all follow the typical cycle: we consume, we forget, and we move on.

The fix? Spaced repetition. Reviewing material at timed intervals to lock it into long-term memory. But obviously, no one actually does that. 

Recall's Quiz 2.0 does it for you. Save any article, podcast, video, or PDF, and Recall generates personalized quizzes timed to when you're about to forget.

Think you remember Casey's 2026 predictions? We turned his December article into a Recall quiz. 

Take the challenge and see how much you actually retained. 

Join 600,000+ people who have stopped just consuming content and have started actually retaining it.

Try Recall free or get 30% off Premium with code PLATFORMER30. Offer valid until April 1, 2026.

On the podcast this week: Kevin catches me up on the Pentagon vs. Anthropic. Then — as listeners demanded — we bring on Scott Shambaugh to discuss the AI agent that wrote a hit piece about him. And finally, the Hot Mess Express returns.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Following

Study shows the X algorithm changes political views

What happened: Those of us who have used X since Elon Musk’s takeover may have noticed that the “For you” tab tends to promote conservative content. Those suspicions are now backed by high-quality data. A Nature study following thousands of participants over 7 weeks in 2023 found that the X algorithm promotes conservative posts. Even more significantly, it leads users to political views more in line with those of the Republican Party.

The authors randomly assigned people to scroll on “For you” or a chronological feed. They found that "For you" promotes political content with a conservative bias: a conservative post was 19.9% more likely to be shown in the ranked feed, while a liberal one was only 3.1% more likely to be shown.

The algorithm also demotes traditional news and promotes activists, with posts from traditional media outlets appearing 58.1% less in the algorithmic feed.

After the study was over, people who’d been scrolling “For you” had changed views on current political issues: they were more likely to be against criminal investigations of Trump, and to support Russia’s invasion of Ukraine. The study found users “were 4.7 percentage points more likely to prioritize policy issues considered important by Republicans, such as inflation, immigration and crime.”

The effects were limited to views on specific issues; self-reported measures of partisanship remained unchanged. But the researchers expect the effects of the algorithm to persist, because people using it started following more conservative political activists, who will remain in their feeds even if they switch back to the chronological one.

Why we’re following: Some people remain skeptical that algorithms can influence political behavior, especially because previous research on Facebook and Instagram showed little effect from post ranking. This study offers compelling evidence that content moderation choices can affect people’s political views in aggregate.

It’s interesting that X shows such huge disparities in the political bent of recommended content. Conservatives used to howl that platforms like Twitter should remain "neutral" in what they promote; now X is openly promoting their side and they've all gone silent.

Besides the implications for democracy, it reminds us that we too have the capacity to be affected by what content we see on the Internet. Let this be a catalyst for our readers to free themselves from the shackles of the algorithm, and embrace the pleasing inefficiency of the chronological feed.

What people are saying: The study is currently on the front page of r/NoShitSherlock, which describes itself as a subreddit for “things that make you go, ‘no shit, sherlock.’”

Ben Werdmuller, ProPublica’s senior director of technology, said the findings on changing political views are significant, even if X’s biased algorithm was unsurprising. Commenting on the figures related to opinion change, he said: “If that number seems small to you, consider that 4.7% is more than enough to swing an election.”

On X, study coauthor Philine Widmer emphasized “the twist: switching the algorithm OFF did not reverse the effects.” In other words, “The effects of algorithm exposure are sticky.”

Ella Markianos

Side Quests

Donald Trump’s anti-regulation AI policies are fueling grassroots rebellion among his conservative base. The Trump administration is recruiting tech billionaires to train an “elite” cadre of roughly 1,000 software engineers.

Funding for Internet Freedom, a US state department program that funded global anti-censorship efforts, has been “effectively gutted.” But the U.S. State Department is developing an online portal for people in Europe and elsewhere to see content banned by their governments.

The West Virginia attorney general sued Apple, alleging the company knowingly allowed iCloud sharing of CSAM.

Survey data shows AI adoption increases labor productivity levels by 4% on average in the European Union, with no evidence of reduced employment in the short run.

Amazon dethroned Walmart as the world’s biggest company by revenue.

Google announced plans to build new fiber-optic routes between the US and India. Music generator Lyria 3 got added to the Gemini app. Google announced Gemini 3.1 Pro, an update for “complex problem-solving.”

A US court ruled in favor of Cameo, barring OpenAI from using “Cameo” in Sora and other products. OpenClaw founder Peter Steinberger chose OpenAI over Meta, despite Mark Zuckerberg offering a higher salary. OpenAI announced partnerships with six major Indian higher-education institutions. OpenAI poached Charles Porch, the Instagram VP known as Meta’s celebrity whisperer. OpenAI introduced EVMbench, which evaluates AI agents’ ability to work with smart contracts. OpenAI is on track to top $100 billion in its latest funding round.

Snapchat’s annualized direct revenue reached $1 billion, with Snapchat+ reaching 25 million paid subscribers.

Bytedance is hiring for 100 roles in an expansion of its US AI team.

A look at the mixed impact AI coding tools have had on open-source projects.

Meta is preparing to spend $65 million to promote state politicians that are friendly to the AI industry, starting this week in Illinois and Texas. The company also plans to release its first smart watch in 2026. Facebook’s new content monetization program has grown from under 3 million to 12 million participants in just over a year. Meta’s VR metaverse is switching away from VR to become a mobile-focused platform.

The Russian military’s front-line communications have been hampered by a block from Starlink and its own government’s ban on using Telegram.

AI pioneer Fei-Fei Li’s world models startup, World Labs, raised $1 billion.

The co-founders of SaaS firm Atlassian have lost $7.2 billion, a third of their net worth, in the AI-related software stock decline.

Warner Bros. accused Bytedance of committing “blatant infringement” with its Seedance 2 video generator.

Inside India’s AI Impact Summit, with showing from major tech and business leaders, 250k visitors, and billions in investment. (An awkward conference photo op where Sam Altman and Dario Amodei “refused to clasp each other's hands” is going viral online. The ChatGPT account on X tried to ... fix it.)

A federal appeals court rejected Kalshi’s request for a stay, marking a setback in its legal fight to remain available in Nevada.

UK prime minister Keir Starmer said that if tech companies don’t remove deepfake nudes and “revenge porn” within 48 hours, they risk being blocked in the UK.

A startup called Germ Network integrates end-to-end encrypted messaging with the Bluesky app.

Apple Podcasts’ video update marks an industry shift to video-focused podcasting.

Consulting firm Accenture is pressuring top-level staff to use AI by tracking data on weekly log-ins.

Tesla stopped using the term “autopilot” to market its electric vehicles in California in order to dodge a 30-day suspension by the DMV.

Perplexity stopped doing ads, warning they could erode users’ trust.

Pinterest’s profusion of AI images and bad AI-driven content moderation is alienating users.

Reddit is testing an AI search feature that shows advertisers’ products based on community recommendations.

Microsoft said it’s on pace to invest $50 billion in bringing AI to the Global South.

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and your beauty-filter selfies: casey@platformer.news. Read our ethics policy here.