What a big study of teens says about social media — and what it can’t
The “moral panic” framing misses how platforms actually harm kids. PLUS: Newsom investigates TikTok over Trump, and the Clawdbot frenzy
I.
A new study on how social media affects teens' mental health has added new fuel to the debate over whether countries should ban children under 16 from using those services. But much of the public discussion so far has exaggerated the significance of the researchers’ findings while minimizing the ongoing product safety risks inherent in Instagram, TikTok, Snapchat and other products.
Today, let’s take a look at what that study actually said — and why, despite some thoughtful design choices and important findings, it leaves the actual question of what to do about social platforms still unresolved.
On Jan. 14, the Guardian posted a story with a provocative headline: “Social media time does not increase teenagers’ mental health problems – study.” It referred to a paper published last month in the Journal of Public Health by researchers at the University of Manchester.
Once a year from 2021 to 2023, the researchers surveyed 25,600 British children in years 8 through 10 — when they were roughly 12 to 15 years old. Each autumn, participants reported how many hours on a typical weekday they spent on social media and how often they played video games. They then answered a 10-item questionnaire about emotional difficulties, including worry, sadness, and loneliness. The researchers tested whether changes in a teen's social media use or gaming predicted changes in their emotional difficulties the following year.
Here’s Anna Bawden at the Guardian:
The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers’ symptoms of anxiety or depression over the following year.
Increases in girls’ and boys’ social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had a zero negative effect on pupils’ mental health.
“We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems — the story is far more complex than that,” said the lead author Dr Qiqi Cheng.
Bawden concluded that “the findings challenge concerns that long periods spent gaming or scrolling TikTok or Instagram are driving an increase in teenagers’ depression, anxiety and other mental health conditions.”
To critics of social media bans, the study confirmed what they have long suspected: that Instagram, TikTok, et al. have been caught up in a moral panic that may ultimately harm children more than it helps them.
The real story, I think, is more complicated.
II.
Some credit where it’s due: the Manchester study was well designed and rigorously executed. Its sample of children was large and diverse — about a third of participants came from underrepresented ethnic backgrounds, and nearly 30 percent qualified for free school meals. The longitudinal design tracked the same kids over time, rather than comparing different groups at a single moment.
And the statistical approach, called a random-intercept cross-lagged panel model, addresses a weakness in earlier research: the tendency to confuse "kids who use social media more tend to be more anxious" with "social media makes kids more anxious." This study instead asked: When this specific kid increases their social media use relative to their own typical level, do they then show increased anxiety relative to their own typical level the following year?
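To make that concrete, here is a toy sketch of the within-person logic in Python. This is my illustration, not the paper's model (a real random-intercept cross-lagged panel model estimates all of these pieces jointly in a structural equation framework), and the numbers below are simulated rather than drawn from the study:

```python
import numpy as np

# Toy illustration of the logic behind a random-intercept cross-lagged
# panel model; the real RI-CLPM estimates everything jointly, but the
# core idea is within-person centering plus a lagged prediction.
rng = np.random.default_rng(0)
n_teens, n_waves = 1000, 3  # three annual survey waves, as in the study

# Simulated self-reports: each teen has a stable baseline (the "random
# intercept") plus year-to-year noise, with no built-in causal effect.
baseline_use = rng.normal(3, 1, size=(n_teens, 1))   # hours per weekday
baseline_sym = rng.normal(10, 3, size=(n_teens, 1))  # symptom score
use = baseline_use + rng.normal(0, 0.5, size=(n_teens, n_waves))
symptoms = baseline_sym + rng.normal(0, 1.5, size=(n_teens, n_waves))

# Within-person centering: compare each teen to their own typical level,
# not to other teens, which strips out stable between-person differences.
use_dev = use - use.mean(axis=1, keepdims=True)
sym_dev = symptoms - symptoms.mean(axis=1, keepdims=True)

# Cross-lagged check: does an above-usual year of social media use
# predict above-usual emotional symptoms the following year?
x = use_dev[:, :-1].ravel()  # this year's use deviation
y = sym_dev[:, 1:].ravel()   # next year's symptom deviation
slope = (x * y).sum() / (x * x).sum()
print(f"cross-lagged slope: {slope:.3f}")  # near zero: no effect simulated
```

A near-zero slope here is the toy analogue of the study's null result: once you compare teens to their own baselines, more-than-usual screen time does not predict worse-than-usual symptoms a year later.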
But as with any research, this study has some important limitations — ones that the researchers themselves acknowledge.
Most important is the 12-month gap between measurements. Many harms from social media may not be detectable in a survey that a teenager takes once a year. Maybe they spent the spring spiraling over negative social comparison, but unfollowed a bunch of accounts and felt better by the fall. Maybe they lost a lot of sleep checking their phone in the spring but recovered once their parents started taking the phone away overnight.
As the researchers write, their design "does not rule out the possibility of negative effects of social media or gaming in the shorter-term." Self-reported screen time can also be unreliable; studies comparing it to actual usage logs find that teens often underestimate their use.
While the study did distinguish between "active" and "passive" use of social media, and found no significant effects even among those who did more mindless scrolling, the distinction may not be enough to capture the experiences we generally worry about. Are the teens seeing fun viral dances or pro-anorexia content? Are they chatting with friends, or with adult strangers Instagram recommended they follow?
Finally, the questionnaire that students took asked them to report symptoms of mental health issues, but it is a screening tool, not a diagnosis. Students may have reported that they "worry a lot" or "feel unhappy," but that is not the approach you would use to measure population-level changes in clinical anxiety or depression.
The researchers are admirably straightforward about what their study did. It tested a narrow hypothesis: Does total self-reported time on social media predict emotional symptoms one year later?
The answer appears to be no, or at least not in ways we can detect. That's an important finding, and one that replicates earlier studies showing weak overall effects of social media on teens. (One famous 2019 study of more than 350,000 adolescents in the United States and United Kingdom found that digital technology use explained just 0.4 percent of the variation in participants' well-being — about the same as eating potatoes.)
But “are there measurable population-level effects on teens’ self-reported mental health state” and “is social media safe for teens” are related but separate questions. And lately, the two sides have been talking past each other.
III.
This month Jonathan Haidt, whose 2024 book The Anxious Generation on the dangers of social media became a runaway bestseller, published a chapter in the 2026 World Happiness Report titled "Social Media Is Harming Young People at a Scale Large Enough to Cause Changes at the Population Level." The paper, written with Zach Rausch, acknowledges that the evidence that the spread of social media in the 2010s is a primary cause of adolescent mental health problems is mixed. Correlation does not prove causation, and, as the Cheng study above shows, effect sizes in longitudinal studies are often weak or nonexistent.
“The fact that heavy users of social media are more depressed than light users doesn’t prove that social media caused the depression,” they write. “Perhaps depressed people are more lonely, so they rely on Instagram more for social contact? Or perhaps there’s some third variable (such as neglectful parenting) that causes both?”
Whatever the case, they write, this research fails to reckon with arguably the most important data of all: the evidence of direct harm to young people from social media that extends well beyond mental health concerns. They wrote about it in an accompanying blog post:
There is now a lot more work revealing a wide range of direct harms caused by social media that extends beyond mental health (e.g., cyberbullying, sextortion, and exposure to algorithmically amplified content promoting suicide, eating-disorders, and self-harm). These direct harms are not correlations; they are harms reported by millions of young people each year.
The paper cites an internal research project Meta conducted in 2020 into the effects of deactivating Facebook. “People who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” a report on the research found, according to court documents that are part of a lawsuit alleging that social media companies hid the risks of their products from users.
Among the other allegations in the court documents, according to Reuters: that “Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform,” and “Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.”
Meta disputed the allegations, telling Reuters that the company’s teen safety measures are effective and that the documents “rely on cherry-picked quotes.”
To Haidt and Rausch, though, the preponderance of evidence suggests that there are obvious and persistent safety issues on social platforms — and that the platforms have addressed them slowly, begrudgingly, and often to the bare minimum standard required by law.
Moreover, certain effects that appear small at the level of the entire population look much larger when you focus on specific groups: adolescent girls with body-image issues, adolescent boys who fall victim to sextortion, kids with existing mental health issues, and victims of cyberbullying.
I understand the desire to reduce the entire subject to a binary: is social media safe for my kid or not? But the only truthful answer to that is the conclusion so much good research comes to: it depends.
IV.
Unfortunately, I found that level of nuance missing in a piece that two of the Manchester study's researchers wrote in The Conversation. Qiqi Cheng and Neil Humphrey argue that their findings suggest social media bans, like the one Australia recently implemented and the one the UK is now considering, will be ineffective:
Our findings suggest that limiting the hours spent on consoles and apps or measures such as banning social media for under 16s is unlikely to have an effect on teenagers’ mental health in the long term. Policymakers should take note. Worse, such blanket bans may obscure the real risk factors by offering a simple solution to a complex problem.
Instead, it’s important to look at the broader context of a young person’s life, including the factors that may lead to both increased digital technology use and internalising symptoms. If a teenager is struggling, technology use is rarely the sole culprit. By moving away from the predominant “digital harm” narrative, we can focus on the real, complex factors that drive adolescent wellbeing.
Here the researchers extend their conclusions beyond what their data can support. On one hand, I believe them when they suggest that banning social media for under-16s will not instantly improve the median teen’s mental health. On the other, though, blanket bans do offer a simple solution to any number of ongoing problems on these platforms: the ease with which they connect predators to children; addictive mechanics like “streaks” and notifications that roil classrooms and wreck sleep; predictive algorithms that introduce young girls to disordered eating and related harms; and the unsettled feeling that comes from staring way too long at a feed you had only intended to look at for a minute.
The trouble with 30,000-foot-level inquiries into “the effects of social media” is that their measurements are too crude, and their inquiries too vague, to capture the intensity of social media harms among the minority of teens who experience them.
I have said before, and still believe, that Australia-style bans have real costs: to minorities who might have otherwise found connections online, to young creators who might have built businesses there, and to all the teens who manage to use these platforms more or less unscathed and might have enjoyed expressing themselves there.
But several studies of the crudest blanket ban of all — getting smartphones out of schools — show that it improves academic performance, particularly among lower-achieving students. I expect that applying similarly harsh measures to teens' access to social media would produce similarly positive results.
The next time you go to Las Vegas, you’ll notice that there are no 13-year-olds in the casinos. The reason is not because a series of longitudinal studies proved to the satisfaction of the gaming industry that gambling causes anxiety and depression. Rather, there are no 13-year-olds in casinos because we know that the environment is designed to exploit them.
Research like the Manchester study can offer important insights into narrow questions. But it would be a mistake to rely on it to set policy. For that you need the full picture, and the full picture of children’s experience on social media over the past decade is damning.

Sponsored
Three weeks in. How’s the doom scrolling going?

It's easy to slip back into the same old cycle: mindlessly consume content, remember nothing, feel overwhelmed by information you don't even care about.
Recall helps you break the cycle, turning mindless consumption into useful knowledge.
- Save hours: AI summaries of podcasts, YouTube videos, and PDFs let you preview content before committing your time.
- Never lose another insight: Unlimited storage and instant search across everything you've saved. No more "I know I read something about this..."
- Insights no other AI can give you: Chat with YOUR content to create frameworks uniquely yours. "Create a productivity schedule based on @Tim Ferriss, and my journals."
Swap the doom scroll for content that sticks.
Try Recall free or get 30% off Premium with code PLATFORMER30 (valid through March 1st)

Following
Gavin Newsom investigates TikTok over “censorship”
What happened: California Gov. Gavin Newsom launched an investigation into whether TikTok is violating state law by "censoring" content critical of President Trump.
Newsom's office has "received reports" and "independently confirmed instances" of suppression of content critical of Trump since TikTok's sale, his press office posted on X. The office told Politico that the independent confirmation involved sending a DM containing the word "Epstein," which triggered a warning from TikTok that the message was not sent because it might violate community guidelines.
The inability to send messages with the word “Epstein” is a bit of a head-scratcher. “We don't have rules against sharing the name ‘Epstein’ in direct messages and are investigating why some users are experiencing issues,” a TikTok US spokesperson told NPR. Tests by NPR and other accounts showed that the problem was happening inconsistently.
Why we’re following: Newsom is using the Republican playbook against the top Republican. Conservatives have long accused TikTok and other social platforms of bias and censoring them; Newsom seems to believe that turnabout is fair play.
The review comes amid numerous claims that TikTok started suppressing anti-ICE content after it was spun out of ByteDance into a new entity with several Trump supporters as investors, notably Oracle billionaire Larry Ellison. TikTok says it has not made any changes to its content moderation or recommendation systems since the sale.
Newsom got the headlines he wanted, but set a bad precedent. It's weak when Republicans lob unsubstantiated claims of censorship every time they don't get what they want out of a social platform, and it's weak when Democrats do it, too.
What people are saying: It’s “inaccurate to report that this is anything but the technical issues we’ve transparently confirmed,” a TikTok spokesperson told the New York Times.
“(TikTok) was 100% fucked up yesterday. All the (ICE) posts were gone but so was everything else I normally got. Instead I mostly got videos about how great corvettes were,” Business Insider chief correspondent Peter Kafka wrote on X. “If it was a censorship plot it wasn't executed well.”
—Lindsey Choo
AI agent Clawdbot takes vibe coders by storm, runs into trademark trouble
What happened: After the breakout success of Claude Code this year, vibe coders are hungry for more. So developer Peter Steinberger made Clawdbot, an open-source AI agent that runs locally on your computer. The agent can send emails, spin up new agents, and even make updates to itself on your behalf.
Clawdbot was initially named after "Clawd," the pixelated lobster that serves as the official mascot of Anthropic's Claude Code. Predictably, Anthropic asked Steinberger to change the name. Steinberger's lobster agent is now named Moltbot, an apparent reference to the way lobsters shed their shells.
The agent formerly known as Clawdbot has become popular on X, with people posting about how it’s writing emails and booking restaurants for them. People are even getting Mac Minis to run their Clawdbots 24/7.
Why we're following: The newly christened Moltbot is a small step toward the dream of the AI personal assistant: a little genie in your laptop that does whatever you want. Right now, Moltbot runs on your computer, making it more customizable than Claude Code, and connects to messaging apps including Telegram, where you can message it requests from your phone.
Users have been having a field day automating weird stuff with Moltbot, and discussing it all in the Moltbot Discord, named “Friends of the Crustacean.”
Moltbot, like many AI agents, is also risky to let run free on your various internet accounts. Platformer encourages caution in your vibe coding.
What people are saying: In an X article, a16z partner Olivia Moore wrote, “Clawdbot is amazing — and, I don't think consumers should use it.” Moore was impressed with a daily news and X briefing she set up using Clawdbot, but thought it was too technically demanding — and too unsafe — for everyday users.
Y Combinator CEO Garry Tan replied: “The future is already here, just not evenly distributed.”
MacStories editor-in-chief Federico Viticci was impressed with its integrations and self-improvement abilities, though he doesn't think it is quite ready for consumers: "Clawdbot is a boutique, nerdy project right now."
But it raised a big question for him: “When the major consumer LLMs become smart and intuitive enough” to do all the computer stuff for you, then, “what will become of 'apps' created by professional developers?” Good question.
Entrepreneur Chris Bakke was more skeptical. On X, he joked that he was “Watching my friends spend $1500 and 30 hours of their time setting up an AI chatbot that summarizes the weather.” Platformer would agree this does not exactly look like the coming of AGI.
—Ella Markianos

Side Quests
The FBI is investigating Signal group chats that track ICE movements in Minnesota. OpenAI CEO Sam Altman told employees that “what’s happening with ICE has gone too far.”
Meta is blocking links to an ICE tracking list on its platforms. Surely the House Republicans who investigated Meta's temporary blocking of the Hunter Biden laptop story will investigate this one, too?
Meta CEO Mark Zuckerberg authorized employees to let minors access chatbot companions that staffers said were capable of having sexual interactions, legal filings show.
A profile of Trump AI adviser Sriram Krishnan.
Dozens of nudify apps are on Google and Apple's platforms, the Tech Transparency Project found.
A conversation with Kevin Weil, the head of OpenAI's new in-house science team, about the company's push into science. This week the team debuted Prism, a free ChatGPT-powered tool meant to make writing scientific papers easier.
AI companies including Anthropic and Meta sought to scan books in bulk without authors’ knowledge for their AI systems, court filings show.
Anthropic is reportedly set to double its funding round to $20 billion.
The UK will build AI tools to upgrade public services with funding from Meta. Meta plans to test premium subscriptions on Instagram, Facebook and WhatsApp. The tech giant will pay Corning up to $6 billion through 2030 for fiber-optic cables in its data centers. WhatsApp is now offering an advanced security mode.
TikTok settled a lawsuit over claims that it engineered its products to get users addicted. Top TikToker Khaby Lame signed a $975 million, three-year exclusive rights deal with financial services firm Rich Sparkle Holdings, which plans to create an AI avatar of him.
The EU ordered Google to open Android to rival AI assistants and give data to other search engine providers. You can now ask AI Overviews follow-up questions. Android is getting an expanded set of anti-theft protection features. Google’s cheaper AI Plus subscription is now available in the US.
Amazon agreed to pay $309 million to settle a class action over alleged incorrect refund denials. All Amazon Go and Amazon Fresh locations will be permanently closed.
A group of YouTubers suing tech companies for allegedly scraping their videos to train AI is also suing Snap.
France will replace Microsoft Teams and Zoom with domestically developed videoconferencing platform Visio by 2027.
Pornhub parent Aylo and other adult sites said they will restrict UK visitors on the sites starting Feb. 2.
China’s Moonshot AI released an upgrade of its flagship model Kimi.
Pinterest will lay off 15% of its workforce and suggested it would use AI to do their work. The stock plummeted anyway.
Yahoo launched Scout, an AI answer engine powered by Claude.
A look at whether AI can actually make the judicial system better.

Those good posts
For more good posts every day, follow Casey’s Instagram stories.


Talk to us
Send us tips, comments, questions, and social media studies: casey@platformer.news. Read our ethics policy here.