Instagram tackles its child safety problem

Why its peers should follow suit — and go further

(Alexander Shatov / Unsplash)

In May, I wrote here that the child safety problem on tech platforms is worse than we knew. A disturbing study from the nonprofit organization Thorn found that the majority of American children were using apps years before they were supposed to be — and fully a quarter of them said they had had sexually explicit interactions with adults. That put the onus on platforms to do a better job of both identifying child users of their services and protecting them from the abuse they might find there.

Today, Instagram made some promising moves in that direction. The company said that it would:

  • Make accounts private by default for children 16 and younger.
  • Hide teens’ accounts from adults who have engaged in suspicious behavior, such as being repeatedly blocked by young users.
  • Prevent advertisers from targeting children with interest-based ads. (There was evidence that ads for smoking, weight loss and gambling were all being shown to teens.)
  • Develop AI tools to prevent underage users from signing up, remove existing accounts of kids under 13, and create new age verification methods.

The company also reiterated its plan to build a kids’ version of Instagram, which has drawn condemnations from … a lot of people.

Clearly, some of this falls into “wait, they weren’t doing that already?” territory. And Instagram’s hand has arguably been forced by growing scrutiny of how kids are bullied on the app, particularly in the United Kingdom. But as the Thorn report showed, most platforms have done very little to identify or remove underage users — it’s technically difficult work, and you get the sense that some platforms feel like they’re better off not knowing.

So kudos to Instagram for taking the challenge seriously, and building systems to address it. Here’s Olivia Solon at NBC News talking to Instagram’s head of public policy, Karina Newton (no relation), on what the company is building:

"Understanding people's age on the internet is a complex challenge," Newton said. "Collecting people's ID is not the answer to the problem as it's not a fair, equitable solution. Access depends greatly on where you live and how old you are. And people don't necessarily want to give their IDs to internet services."

Newton said Instagram was using artificial intelligence to better understand age by looking for text-based signals, such as comments about users' birthdays. The technology doesn't try to determine age by analyzing people's faces in photos, she said.
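
Neither Newton nor NBC explains how those text signals are actually scored, so here’s a minimal, hypothetical sketch of the kind of signal she describes: scanning an account’s comments for birthday messages that mention an age. The pattern, function name, and sample comments are my own illustration, not Instagram’s system.

    import re

    # Illustrative only: Instagram has not published its age-detection model.
    # One cheap text-based signal is a birthday comment that mentions an age,
    # e.g. "Happy 12th birthday!"
    BIRTHDAY_RE = re.compile(
        r"happy\s+(\d{1,2})(?:st|nd|rd|th)\s+birthday",
        re.IGNORECASE,
    )

    def age_signals(comments):
        """Return every age mentioned in birthday comments on an account."""
        return [
            int(match.group(1))
            for comment in comments
            for match in BIRTHDAY_RE.finditer(comment)
        ]

    print(age_signals([
        "Happy 12th birthday!! 🎂",
        "great photo",
        "happy 12th bday kiddo",  # variant spellings would need their own patterns
    ]))  # -> [12]

A real system would presumably weigh hundreds of noisy signals like this one against each other; the point is simply that age leaks into text long before anyone uploads an ID.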

At the same time, it’s still embarrassingly easy for reporters to identify safety issues on the platform with a handful of simple searches. Here’s Jeff Horwitz today in the Wall Street Journal:

A weekend review by the Wall Street Journal of Instagram’s current AI-driven recommendation and enforcement systems highlighted the challenges that its automated approach faces. Prompted with the hashtag #preteen, Instagram was recommending posts tagged #preteenmodel and #preteenfeet, both of which featured sometimes graphic comments from what appeared to be adult male users on pictures featuring young girls.

Instagram removed both of the latter hashtags from its search feature following queries from the Journal and said the inappropriate comments show why it has begun seeking to block suspicious adult accounts from interacting with minors.

Problematic hashtags aside, the most important thing Instagram is doing for child safety is to stop pretending that kids don’t use its service. At too many companies, that pretense is still the default — and it has created blind spots that both children and predators can too easily exploit. Instagram has now identified some of these, and publicly committed to eliminating them. I’d love to see other platforms follow suit here — and if they don’t, they should be prepared to explain why.

Of course, I’d also like to see Instagram do more. If the first step for platforms is acknowledging they have underage users, the second step is to build additional protections for them — ones that go beyond their physical and emotional safety. Studies have shown, for example, that teenagers are more credulous than adults and more likely to believe false stories, and they may also be more likely to spread misinformation. (This could explain why TikTok has become a popular home for conspiracy theories.)

Assuming that’s the case, a platform that was truly safe for young people would also invest in the health of its information environment. As a bonus, a healthier information environment would be better for adults and our democracy, too.

“When you build for the weakest link, or you build for the most vulnerable, you improve what you’re building for every single person,” Julie Cordua, Thorn’s CEO, told me in May. By acknowledging reality — and building for the weakest link — Instagram is setting a good example for its peers.

Here’s hoping they follow suit — and go further.


Governing

Rising sea levels threaten shoreline headquarters owned by Facebook, Google and other tech companies. There are growing conflicts over who should pay to protect the campuses — the tech giants or the governments. (NPR)

Tencent suspended new user registration for WeChat to upgrade its security compliance. The latest ripple of China’s crackdown on consumer tech. (Reuters)

An interview with Erin Saltman, director of programming for the Global Internet Forum to Counter Terrorism, on the challenges of expanding its work to include domestic terrorist organizations. “One of the immediate things that ends up happening is a question of: Are you over-censoring? So that's why, when we're incrementally building out, we're tying it to overt real-world harm, overt ways that violent extremism manifests. That's not wishy-washy.” (Issie Lapowsky / Protocol)

A look at the prevalence of non-disclosure agreements in Silicon Valley, which often prohibit workers from talking about abuses suffered on the job. “All the separation agreements reviewed by Insider also include non-disparagement clauses, many of which are so broad that employment lawyers say they could limit the employee from saying virtually anything about the company.” (Matt Drange / Insider)

Facebook partnered with the nonprofit Meedan to provide expert training to its 80 fact-checking partners on the subject of health misinformation. “Moving forward, the partnership will give Facebook's fact-checking partners access to Meedan’s health experts whenever they need immediate help with health-related fact-checks.” (Sara Fischer / Axios)


Industry

⭐ Big day for earnings! Everyone is making absolutely insane piles of money.

Facebook paused sales of the Oculus Quest 2 after several reported cases of skin irritation caused by the included foam faceplate. The company is issuing free replacement face covers to anyone who wants one. Here’s Scott Stein at CNET:

In a Facebook post on the issue from earlier this year, the company says a small percentage of Quest 2 owners have reported the issue. But in some cases reported online, the issue has been bad enough to cause people's faces to puff up and their eyes to close. Facebook changed the manufacturing process of its foam face interfaces earlier this year, but the concerns still prompted Facebook to stop selling the Quest 2 in coordination with the US Consumer Product Safety Commission. […]

This is happening a month before Facebook is updating the Quest 2 with more storage: a new version of the $299 Quest that goes on sale Aug. 24 will have 128GB of storage instead of 64GB. Quest 2 models will include the silicone face-cover in the box from that point onward. It's awkward timing for the move, but also looks like a chance for Facebook to replace Quest 2 stock with models that have the silicone covers.

And: Facebook is exploring integrating Oculus Move workouts with Apple Health. (Mark Gurman / Bloomberg)

Pinterest will let creators earn commissions from shoppable pins. Makes sense to me. (Ashley Carman / The Verge)

Clubhouse saw fewer than 500,000 new downloads after opening up to everyone. Are those 10 million people who reportedly joined its waitlist ever going to show up there? (Arielle Pardes / Wired)

Discord added threaded conversations. The threads automatically archive after 24 hours. (Taylor Hatmaker / TechCrunch)

Instacart replaced CEO Apoorva Mehta after a history of chaotic management, according to this report. The company named former Facebook executive Fidji Simo to the post earlier this month. (Tom Dotan / The Information)

TikTok announced its first regional security office in Dublin, Ireland. Its global security operations are run out of an office in Washington, DC. (TikTok)

Only a single company — Adobe — has mentioned “racial justice” in an earnings call so far this quarter. A year ago, terms related to racial justice were mentioned on at least 500 calls. (Payne Lubbers / Bloomberg)


Talk to me

Send me tips, comments, questions, and child safety solutions: casey@platformer.news.