The synthetic social network is coming

Between ChatGPT’s surprisingly human voice and Meta’s AI characters, our feeds may be about to change forever

Meta’s AI assistant characters (Meta)

Today, let’s consider the implications of a truly profound week in the development of artificial intelligence, and discuss whether we may be witnessing the rise of a new era in the consumer internet.


On Monday, OpenAI announced the latest updates for ChatGPT. One feature lets you interact with its large language model via voice. Another lets you upload images and ask questions about them. The result is that a tool which was already useful for lots of things suddenly became useful for much more. For one thing, ChatGPT feels even more powerful as a mobile app: you can now chat with it while walking around town, or snap a picture of a tree and ask the app what you’re looking at. 

For another, though, adding a voice to ChatGPT begins to give it a hint of personality. I don’t want to overstate the case here — the app typically generates dry, sterile text unadorned by any hint of style. But something changes when you begin speaking with the app in one of its five native voices, which are much livelier and more dynamic than what we are used to with Alexa or the Google assistant. The voices are earnest, upbeat, and — by nature of the fact that they are powered by an LLM — tireless. 

We are at the earliest stage of all this; access to the voice feature is just rolling out to ChatGPT Plus subscribers, and free users won’t be able to use it for some time. And yet even in this 1.0 release, you can see the clear outlines of the sort of thing popularized in the decade-old film Her: a companion so warm, empathetic and helpful that in time its users fall in love with it. The Her comparisons are by now cliche when discussing AI in Silicon Valley, and yet until now its basic premise has felt like a distant sci-fi dream. On Thursday I asked the speaking version of ChatGPT to give me a pep talk to hit my deadline — I was running back from the Code Conference and already behind schedule — and as the model did its best to gas me up, it seemed to me that AI had taken an emotional step forward.

You can imagine the next steps here. A bot that gets to know your quirks; remembers your life history; offers you coaching or tutoring or therapy; entertains you in whichever way you prefer. A synthetic companion not unlike the real people you encounter during the day, only smarter, more patient, more empathetic, more available.

Those of us who are blessed to have many close friends and family members in our lives may look down on tools like this, experiencing what they offer as a cloying simulacrum of the human experience. But I imagine it might feel different for those who are lonely, isolated, or on the margins. On an early episode of Hard Fork, a trans teenager sent in a voice memo to tell us about using ChatGPT to get daily affirmations about identity issues. The power of giving what were then text messages a warm and kindly voice, I think, should not be underestimated.


OpenAI tends to present its products as productivity tools: simple utilities for getting things done. Meta, on the other hand, is in the entertainment business. But it, too, is building LLMs, and on Wednesday the company revealed that it has found its own uses for generative AI and voices.

In addition to an all-purpose AI assistant, the company unveiled 28 personality-driven chatbots to be used in Meta’s messaging apps. Celebrities including Charli D’Amelio, Dwyane Wade, Kendall Jenner, MrBeast, Snoop Dogg, Tom Brady, and Paris Hilton lent their voices to the effort. Each of their characters comes with a brief and often cringeworthy description; MrBeast’s Zach is billed as “the big brother who will roast you — because he cares.”

All of this feels like an intermediate step to me. To the extent that there is a market of people who want to have voice chats with a synthetic version of MrBeast, the character they want to interact with is MrBeast — not big brother Zach. I haven’t been able to chat with any of these character bots yet, but I struggle to understand how they will have more than passing novelty value.

At the same time, this technology is new enough that I imagine celebrities aren’t yet willing to entrust their entire personas to Meta for safekeeping. Better to give people a taste of what it’s like to talk to AI Snoop Dogg and iron out any kinks before delivering the man himself. And when that happens, the potential seems very real. How many hours would fans spend talking to a digital version of Taylor Swift this year, if they could? How much would they pay for the privilege?  

While we wait to learn the answers, a new chapter of social networking may be beginning. Until now when we have talked about AI in consumer apps it has mostly had to do with ranking: using machine-learning tools to create more engaging and personalized feeds for billions of users. 

This week we got at least two new ways to think about AI in social feeds. One is AI-generated imagery, in the form of the new stickers coming to Meta’s messaging apps. It’s unclear to me how much time people want to spend creating custom images while they text their friends, but the demonstrations seemed nice enough.

More significant, I think, is the idea that Meta plans to place its AI characters on every major surface of its products. They have Facebook pages and Instagram accounts; you will message them in the same inbox that you message your friends and family. Soon, I imagine, they will be making Reels.

And when that happens, feeds that were once defined by the connections they enabled between human beings will have become something else: a partially synthetic social network.

Will it feel more personalized, engaging, and entertaining? Or will it feel uncanny, hollow, and junky? Surely there will be a range of views on this. But either way, I think, something new is coming into focus.

Code Conference

I had a memorable time in Southern California this week during the Code Conference, and I’m grateful to Vox Media for giving me the chance to co-host the event with my friends Nilay Patel of The Verge and CNBC’s Julia Boorstin. I got to have two great conversations on stage: one with Artifact co-founder Mike Krieger, who broke the news that the AI news reading app will now let users post words and pictures without links; and one with AI safety researchers Ajeya Cotra and Helen Toner, the latter of whom sits on the board of OpenAI. When those sessions are made available on YouTube, I’ll be sure to link them here.

At the risk of disappointing anyone who came here today looking for a column on Julia’s interview with X CEO Linda Yaccarino, I don’t have much to say on the subject that hasn’t already been said. Yaccarino fended off most of Julia’s excellent questions with GPT-2-level responses, punctuating her answers with dutiful praise for Elon Musk and the “velocity of change” he brings to the company. I’m grateful Yaccarino took a turn in the hot seat, but in the end she had little to offer — just some numbers that will never be audited, and explanations that don’t add up.

You can watch Yaccarino’s appearance here.

On the podcast this week: Kevin and I catch up on a huge week of AI news. Then, I talk about my experience with the Meta Quest 3. And finally, notorious prompt engineer Riley Goodside joins us to talk about what it’s like to have one of the weirdest jobs in tech today.

Apple | Spotify | Stitcher | Amazon | Google



Those good posts

For more good posts every day, follow Casey’s Instagram stories.




Talk to us

Send us tips, comments, questions, and synthetic social posts.