Meta's Nick Clegg on how AI is reshaping the feed

A year after recommended posts caused a crisis at Instagram, the company's systems have improved — and it's ready to talk about how they work

Programming note: With this edition of Platformer, we are officially on summer break. (See our posting schedule here.) As usual, we’ll be using this time to recharge and think about the future. Also as usual, don’t be surprised if I pop up with an extra edition or two if big news breaks. Breaks like these are a huge part of what makes Platformer sustainable over the long run — thanks to all of you who support us while we’re away. Look for new episodes of Hard Fork on Friday and on July 14, and the newsletter will be back July 17.

Nick Clegg, Meta’s president of global affairs, speaks during a press conference in Brussels in December. (Kenzo Tribouillard / Getty Images)

Last year, as the Instagram feed began to fill up with recommended posts, the company was thrown briefly into crisis. The once-familiar landscape of friends, family, and influencers you had chosen to follow had begun to be replaced by algorithmic guesses. “Make Instagram Instagram again,” pleaded Kylie Jenner. Many viral tweets followed in the same vein.

“When you discover something in your feed that you didn't follow before, there should be a high bar — it should just be great,” Instagram chief Adam Mosseri told me at the time. “You should be delighted to see it. And I don't think that’s happening enough right now. So I think we need to take a step back, in terms of the percentage of feed that are recommendations, get better at ranking and recommendations, and then — if and when we do — we can start to grow again.”

Mosseri told me he was confident Instagram would get there. And indeed, as I scroll through the app today, what the company calls “unconnected content” — posts from people you don’t follow — has once again roared to the forefront. After I watched a few Reels from one popular comedian that a friend had sent me, my Instagram feed quickly filled up with the Reels of his I hadn’t watched yet. 

As a longtime Instagram user, I still find all this somewhat jarring. But while recommendations are more prevalent than ever in the app, there’s no hint of the public uproar that consumed Instagram last summer. In part that’s because the recommendations really are better than they were a year ago; in part that’s because the trend that precipitated all this — increasing consumer demand for short-form video — continues to accelerate. 

Also, of course, it’s partly that changes like these eventually just wear us down. What once felt weird and bad now feels, through sheer force of repetition, mostly normal. 

But while the transition away from Facebook’s old friends-and-family-dominated feeds to Meta’s algorithmic wonderland seems to be proceeding mostly without incident, the move has given the company a new policy and communications challenge. If you’re going to recommend lots of posts for people to look at, you have to know why you’re making those recommendations. 

Without a thorough understanding of how the company’s many interconnected systems are promoting content, you can wind up promoting all sorts of harms. And even if you don’t, an app’s users will have a lot of questions about what they’re seeing. What exactly do you know about them — or think you know about them? Why are they seeing this instead of that? 

To some extent, of course, that’s not a new problem. Facebook, Twitter, and YouTube have long faced questions over why they promoted posts from some users and not others. But in a world where users were choosing what to follow, the questions essentially boiled down to what order the company’s ranking systems placed posts in. Now that the posts in your feed can come from anywhere, it all gets much more confusing.

“One of the biggest problems we have is because that interaction is invisible to the naked eye, it’s pretty difficult to explain to the layperson,” Nick Clegg, Meta’s president of global affairs, told me in an interview. “Of course, what fills that vacuum is the worst fears and the worst suspicions.”

That leads us to Meta’s move today to publish 22 “system cards” outlining why you’re seeing what you’re seeing in the company’s feeds. Written to be accessible to most readers, the cards explain how Meta sources photos and videos to show you, names some of the signals it uses to make predictions, and describes how it ranks posts in the feed from there.

In addition to publishing the cards, which most users probably won’t see, the company is bringing its “Why am I seeing this?” feature to Reels on Facebook and Instagram’s explore page. The idea is to give individual users the sense that they are the ones shaping their experiences on these apps, creating their feeds indirectly by what they like, share, and comment on. If it works, it might reduce the anxiety people have about Meta’s role in shaping their feeds. 

“I think if we could dispel some of the mythology around that, it would be a very significant step forward,” Clegg said.

Of course, that depends in part on how the information in these system cards is received. While little in them seems likely to surprise anyone who has spent much time on social media, seeing it all in black and white could fuel new critiques of Meta, particularly if you’re the sort of person who worries that social apps are engineered to be addictive.

The card for Instagram’s feed, for example, shows that the signals Meta takes into account when deciding what to show you include “How likely you are to spend more than 15 seconds in this session,” “How long you are predicted to spend viewing the next two posts that appear after the one you are currently viewing,” and “How long you are predicted to spend viewing content in your feed below what is displayed in the top position.” 
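
To make concrete how predictions like these typically feed into a ranking, here’s a minimal sketch in Python. The signal names, weights, and simple weighted-sum formula below are illustrative assumptions of mine, not anything Meta has published — the cards name the signals, not the model that combines them.

```python
# Illustrative sketch only: combine predicted-engagement signals into a single
# ranking score. Signal names, weights, and the linear formula are assumptions;
# Meta's system cards describe the signals, not the actual model or weights.

from dataclasses import dataclass


@dataclass
class CandidatePost:
    post_id: str
    # Hypothetical model outputs, one per predicted signal.
    p_session_over_15s: float          # probability the session runs past 15 seconds
    predicted_next_two_posts_secs: float  # predicted viewing time of the next two posts
    predicted_below_fold_secs: float      # predicted time on content below the top position


# Hypothetical hand-picked weights; a real system would learn these.
WEIGHTS = {
    "p_session_over_15s": 2.0,
    "predicted_next_two_posts_secs": 0.05,
    "predicted_below_fold_secs": 0.02,
}


def score(post: CandidatePost) -> float:
    """Combine the predicted signals into one ranking score (weighted sum)."""
    return (
        WEIGHTS["p_session_over_15s"] * post.p_session_over_15s
        + WEIGHTS["predicted_next_two_posts_secs"] * post.predicted_next_two_posts_secs
        + WEIGHTS["predicted_below_fold_secs"] * post.predicted_below_fold_secs
    )


def rank_feed(candidates: list[CandidatePost]) -> list[CandidatePost]:
    """Order candidate posts from highest to lowest predicted engagement."""
    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        CandidatePost("reel_a", 0.9, 22.0, 40.0),
        CandidatePost("reel_b", 0.6, 35.0, 80.0),
        CandidatePost("photo_c", 0.8, 10.0, 15.0),
    ])
    print([p.post_id for p in feed])  # posts sorted by predicted engagement
```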

The system cards, in other words, lay out how Meta works to get you to use its apps for long periods of time. To the extent that this dispels any mythology about the company, I wonder how useful it is to Meta.

Clegg told me that ranking content based on likely engagement isn’t much different from newspapers or book authors choosing stories that readers will likely enjoy. “I know that for some people ‘engagement’ is a dirty word,” he said. “I think it’s actually a lot more nuanced than that.”

Meta also pays attention to “slower time signals,” he said, measuring people’s satisfaction with the app overall rather than just individual posts, and regularly surveys users about their feelings. That all gets fed back into the product design too, he said.

“I don’t think it’s fair to say that all we’re trying to do is just to keep people doomscrolling forever,” he said. “We have no incentive — you’re just simply not going to retain people over time if that’s what you’re trying to solve for. And these system cards, by the way, would look quite different if that’s what we were trying to solve for.”  

Potentially even more useful is another new feature the company is testing, which will let users mark that they are “interested” in a Reel that the company showed them — essentially, giving an explicit endorsement to a recommended video. As the rare person who feels like the TikTok feed has never quite figured out what I really want to see there, I’m interested to see whether asking people for feedback like this more directly will lead to better feeds.

Speaking of TikTok, that company took its own crack at transparency by opening algorithmic transparency centers, which are designed to offer visitors an in-person look at systems that are similar in many ways to the ones Meta is describing today with its system cards. And given the difficult position TikTok is in with the US government, it’s fair to ask how much goodwill companies can actually generate with efforts like these.

One possibility is that TikTok publishing detailed explanations of ranking systems was good for it, but still didn’t satisfy lawmakers’ questions about potential interference from the Chinese government. For all its own issues, that’s one problem that Meta, as an American company, doesn’t have. 

The other possibility, though, is that transparency represents an effort to solve the wrong problem. In this view, it’s not that we don’t understand the contents of our feeds — it’s that we mostly know how these systems work, and we don’t like it.

On balance, though, I’ll take transparency efforts like this every time, if only because it’s difficult to build a better future when you barely understand the present. And on that front, I was heartened to see that Meta is expanding the work it’s doing with academic researchers. The company also announced today that it’s making a library of public posts, pages, groups, and events on Facebook available to qualified research institutions through an application process. The company says doing this will help it meet its obligations under Europe’s new Digital Services Act — one of the first concrete benefits we can expect to see from that law.

“Generally speaking, we believe that as these technologies are developed, companies should be more open about how their systems work and collaborate openly across industry, government and civil society to help ensure they are developed responsibly,” Clegg wrote in his blog post today.

And for once, Meta has adopted a position that almost no one can disagree with.


On the podcast this week: How generative AI has begun to make the web more annoying to read. Plus: the Times’ Joe Bernstein joins to discuss the potential MMA fight between Mark Zuckerberg and Elon Musk. And food reporter Priya Krishna stops by to talk about her experiments cooking with ChatGPT.

Apple | Spotify | Stitcher | Amazon | Google


Governing


Industry


Those good tweets

For more good tweets every day, follow Casey’s Instagram stories.



Talk to us

Send us tips, comments, questions, and your summer vacation plans: casey@platformer.news and zoe@platformer.news.