OpenAI’s platform play

Facebook's social graph went down in the flames of Cambridge Analytica. Will the AI graph fare any better? PLUS: Our new approach to links

OpenAI CEO Sam Altman (center) addresses reporters at OpenAI Dev Day on Monday in San Francisco. (Casey Newton / Platformer)

This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.

In 2007, at its F8 developer conference, the company then known as Facebook introduced the idea of a social graph. Before then, social networks like MySpace kept all of your data within their walled gardens. This protected users’ privacy, but limited their utility. What Facebook proposed was something more ambitious: making your interests, preferences, and friend connections an object for developers to build on, in ways that continuously drove engagement (and thus revenue) back to the social network. What Facebook proposed, in other words, was a platform. 

I’ve noted here in recent weeks all the ways that OpenAI, which counts hundreds of former Meta employees among its ranks, is coming to resemble the company that it now increasingly competes with. OpenAI has recently introduced its first feed, which it calls Pulse; last week it added commerce functionality and a full-fledged social network called Sora.

All of these products begin from the observation that ChatGPT, which now has more than 800 million weekly users, commands an increasing share of attention from a rapidly expanding user base. That growth has made OpenAI wonder the same thing that Facebook executives did nearly two decades ago: can this service grow so important that it becomes a front door to the rest of the web, making it more personalized and useful for users — while also generating huge revenues for the company? 

On Monday, OpenAI introduced what could be its most ambitious platform play to date. At the company’s developer day in San Francisco, CEO Sam Altman announced apps inside ChatGPT: a way to tag other services into conversations, letting you accomplish a range of tasks directly inside the chatbot. 

In a series of demonstrations, software engineer Alexi Christakis showed what ChatGPT looks like after it has turned into a platform. He tagged in educational software company Coursera to help him study a subject; he tagged in Zillow to search for homes in Pittsburgh. In one extended demo, he described a poster he wanted, and Canva generated a series of options directly within the ChatGPT interface. He then used Canva to turn that poster into a slide deck, also within the chatbot. 

Starting today, developers can build these integrations using OpenAI’s software development kit. In addition to those above, services that will work with the feature at launch include Expedia, Figma, and Spotify. In the next few weeks, OpenAI said, they will be joined by Uber, DoorDash, OpenTable, and Target, among others. 

Eventually, OpenAI plans to add a directory that users can browse to find apps that have been optimized for ChatGPT. 

Like Facebook did, OpenAI is launching its platform without a settled strategy for generating revenue. Facebook would eventually begin offering its own currency, Facebook Credits, and require popular applications like Zynga’s world-beating FarmVille to use it for transactions. (Facebook took a 30 percent cut; Zynga alone once accounted for 12 percent of all Facebook revenue.)

OpenAI seems more likely to monetize its platform through revenue-sharing deals or auctioning off placement. Maybe you ask for help with algebra, OpenAI loops in the Coursera app, and takes a finder’s fee if you become a paid user of the latter.

To OpenAI executives, the move helps them pursue what they describe as the goal they had before they got sidetracked by ChatGPT’s success: building a highly competent assistant.

“What you're gonna see over the next six months is an evolution of ChatGPT from an app that is really useful into something that feels a little bit more like an operating system,” Nick Turley, the head of ChatGPT, told reporters in a Q&A session on Monday. “Where you can access different services, you can access software — both the existing software that you’re used to using, but … most exciting to me, new software that has been built natively on top of ChatGPT.”

I believe Turley and other OpenAI executives when they say they believe the platform will make ChatGPT more useful. It also stands to make OpenAI much more powerful, with ChatGPT becoming a new default homepage for the web and many of the web’s biggest businesses compressed into modular suppliers for its chatbot. 

Of course, there was a time when Facebook was the default homepage for many people in the United States, and it tracked closely with the company’s platform era. In the end, though, there turned out to be a time bomb ticking away inside its API: permissive rules that granted apps liberal access to people’s data with a single click on the part of the user. This culminated in the panic over Cambridge Analytica, which revealed that Facebook had enabled the sharing of 87 million users’ data in connection with a quiz app that would be used by a Trump campaign firm in the 2016 election.

While much of the panic around Cambridge Analytica’s ability to manipulate the electorate using Facebook data was overwrought, the larger fears around data privacy on Facebook were justified. By 2015, Facebook had forced developers to use a new, much more restrictive API that effectively ended its social platform ambitions. 

At launch, OpenAI is promising a more rigorous approach to data privacy. OpenAI will share only what it needs to with developers, executives said. (They essentially hand-waved through the details, though, so the actual mechanics will bear scrutiny.) Unlike Facebook, though, OpenAI has no friend graph to worry about — whatever might go wrong between you, ChatGPT, and a developer, it will likely not involve giving away the contact information of all of your friends. 

At the same time, the AI graph may prove even riskier. ChatGPT stores many users’ most private conversations. Leaky data permissions, either intentional or accidental, could prove disastrous for users and the company. It only took one real privacy disaster to end Facebook’s platform ambitions; I can’t imagine it would take much more to end OpenAI’s.

The other open question is whether the new economic incentives that OpenAI is introducing into ChatGPT will warp the user experience. In the same way that search engine optimization has rendered Google a shadow of its former self, it’s easy to imagine OpenAI auctioning off promoted app integrations to the highest bidder — at the cost of ChatGPT’s usefulness. 

“This is exactly why we're trying to keep an open mind right now — because it's impossible to foresee the user interaction effects between those decisions,” Turley said when I asked him today. “But I suspect that this is going to be an important issue, especially as you don't just have the existing contenders in a given category, but exciting new categories where everyone wants to play.”

Altman told me that the company is motivated to preserve users’ trust in ChatGPT.

“I think part of [that trust] is, even if ChatGPT screws up, you feel like it's trying to help you,” he said. “If we break that, or take payment for something we shouldn’t have instead of showing you what we think is best, that would clearly destroy that relationship very fast. So we’re hyper aware of the need to be careful.” 

Still, the company is leaving itself room to explore. 

“I do want to say that I think there's also a lot of nuance in this space,” Greg Brockman, OpenAI’s president, told me in response to the same question. “Because sometimes we don't know what the best product is, right? … We have a principle of really trying to serve the user, and then what does that mean in all these specific contexts?”

These were the sorts of questions that once led a much younger Google to adopt as a motto a phrase that would come to haunt it: don’t be evil. It’s easy to put users first before the revenue comes in. Once you’re operating a platform, though, the incentives can all start to look very different. 


Elsewhere at Dev Day: Codex is now generally available. AgentKit is just what it sounds like. (How does Sierra CEO / OpenAI board chairman Bret Taylor feel about this one?) More on the Apps SDK. API updates.

Elsewhere in OpenAI: In an effort to slake its insatiable desire for chips, OpenAI takes a 10 percent stake in AMD. In return, it will get chips.

The company also acqui-hired personal finance app Roi and its CEO, Sujith Vishwajith. Here's a look at the challenges OpenAI is facing in bringing its Jony Ive-designed hardware to market, including a lack of cloud compute to power inference. And we're now down to 20 finalists for Stargate sites.

Sponsored

Unknown Number Calling? It’s Not Random

The BBC caught scam call center workers on hidden cameras as they laughed at the people they were tricking. One worker bragged about making $250k from victims. The disturbing truth? Scammers don’t pick phone numbers at random. They buy your data from brokers.

Once your data is out there, it’s not just calls. It’s phishing, impersonation, and identity theft. That’s why we recommend Incogni: They delete your info from the web, monitor and follow up automatically, and continue to erase data as new risks appear. Try Incogni here and get 55% off your subscription with code PLATFORMER.

Introducing the Following feed

Last month I previewed some experiments we’re going to be running at Platformer. Today we have the first one for you, and we’re calling it the Following feed. 

When I asked you all for feedback, a minority of you told us you liked our links just as they were. But the majority of you told us that you only skim them, or pass over them entirely. And frankly, there were good reasons to do so. You all have lots of ways to see what happened in tech on any given day, and our approach wasn’t adding much value beyond telling you that a given item felt interesting enough to include. 

At the same time, we often had a lot to say about a given link that we simply withheld — or tried to communicate in a single sentence of text. Often a link tells you what happened next in a story we recently wrote a column about, for example. Or it connects to a larger trend we think is notable but haven’t yet written a full column about. 

The Following feed, which I’m putting together daily with my colleagues Lindsey Choo and Ella Markianos, is our effort to address these and related problems. Here’s what we’re doing:

In each edition, underneath the column, you’ll now find Following. Each day, we’ll choose three to five stories that we’re following. For each item, we’ll tell you what happened, why we’re following it, and notable commentary about the story across the web and social platforms. One of our theories is that stories become notable when notable people are talking about them, and a good links section should capture that as well as the basic facts of the story.

What about other items of interest? You’ll now find those below the Following feed, in a section we’re calling Side Quests. Side Quests are nonrequired reading for our biggest platform junkies. And because we value your time, we’re focusing on making these links shorter and pithier — the way a good print magazine would be.

You’ll find our first effort below. As always, don’t be shy about giving us feedback — we plan to iterate based on your suggestions. And if you love the Following feed, and haven’t yet upgraded your subscription, now is a great time to do so.

We’ll have our next big experiment for you in the weeks ahead. In the meantime, here’s Following.

Following


Sora soars (and slips)

What happened: Sora, OpenAI's new shortform video app, reached #1 on the iOS App Store over the weekend even as it remains invite-only. In keeping with its ready-fire-aim approach to products that may violate copyright, Sam Altman announced changes to Sora in a blog post on Friday. In short, OpenAI is adding new options for rightsholders to control how their IP is used. People will also get more fine-grained control over how others use their likeness in the “cameo” feature, Sora chief Bill Peebles said. The move came after the app had been used to generate tons of video featuring copyrighted characters and real people in compromising situations — including Altman himself.

Even as OpenAI seems to be focusing more on viral products, the team insists this is but a step in their grand mission towards AGI.

Why we’re following it: The cognitive dissonance between OpenAI the mission-focused, benefit-all-of-humanity AGI lab and OpenAI the copyright-flouting social media slop feed continues to make our brains hurt. On one hand, OpenAI is developing an impressive record of hit products that take over social media. On the other, it feels like a risky distraction — both for the company and the people scrolling Sora.

What people are saying: Varun Shetty, the head of media partnerships at OpenAI, told Newcomer that the absence of copyright restrictions is due to competitive pressure: “We’re also in a competitive landscape where we see other companies also allowing these same sorts of generations. We don’t want it to be at a competitive disadvantage.”

After initially panning it, Ben Thompson says he has "done a complete 180 on Sora: this new app from OpenAI may be the single most exciting manifestation of AI yet."

YouTuber Casey Neistat has a video nearing 650,000 views with his thoughts on Sora as a creative tool. The video is notable for the way Neistat denounces the rise of slop while integrating many, many creative uses of Sora within the app — the whole debate in miniature.

Zelda Williams, daughter of the late comedian Robin Williams, begged people in an Instagram story to stop sending her Sora videos of her dad. "You're not making art, you're making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it," she wrote. "Gross."

Meanwhile, what do average users want? Fewer content restrictions. We found App Store reviews reading “It’s so censored it’s not even fun,” and “the safeguards in place prevent you from making anything worth watching,” among other complaints.


App stores cave to Trump over ICE apps

What happened: The DOJ asked Apple to take down ICEBlock, a widely used app that tracks the location of ICE officials, claiming it puts law enforcement officers at risk. Apple complied. Both Google and Apple have also removed similar apps, including Red Dot, from their app stores, though Google said the Department of Justice didn’t ask it to remove them. (Does that make it better, or worse?) Google's explanation: the apps shared the location of what it calls a "vulnerable group that recently faced violence."

Meanwhile, ICE is building out a 24/7 social media surveillance team. Surveillance for thee, but not for me.

Why we’re following it: The platforms' near-total capitulation to Trump Administration jawboning, after performatively squealing over Biden Administration jawboning, is both pathetic and worrisome.

What people are saying: “No one, not even [US attorney general Pam] Bondi, is claiming any aspect of ICEBlock is illegal,” writes John Gruber for Daring Fireball. Apple’s capitulation to the takedown demand is a display of weakness, Gruber adds, given that it would have likely prevailed in court.

Joshua Aaron, the developer of ICEBlock, said the app’s services are “no different from crowdsourcing speed traps,” which Apple itself implements as part of its Maps app. Google Maps has the same feature.


The new social media reckoning

What happened: The long-running debate about whether and how social networks are polarizing politics has a compelling new entrant: prominent historian Francis Fukuyama, who has come out as a social networks-are-polarizing person. "It’s the Internet, Stupid," reads the title of his new essay in Persuasion. After dispensing with alternate explanations, he writes:

While previously “truth” was imperfectly certified by institutions like scientific journals, traditional media with standards of journalist accountability, courts and legal discovery, educational institutions and research organizations, the standard for truth began to gravitate instead to the number of likes and shares a particular post got. The large tech platforms pursuing their own commercial self-interest created an ecosystem that rewarded sensationalism and disruptive content, and their recommendation algorithms, again acting in the interest of profit-maximization, guided people to sources that never would have been taken seriously in earlier times.

Why we’re following it: Essays about social network-fueled polarization are having a mini-boom on Substack. (Which just admitted it's a social media app, too, by the way.) Fukuyama's piece pairs nicely with Nathan Witkin's "The Case Against Social Media is Stronger Than You Think," published in August.

What people are saying: Steven Pinker gave Fukuyama's piece a shout-out.

Side Quests

Altman and OpenAI president Greg Brockman talk to Steven Levy. OpenAI says GPT-5 will now do a better job routing people in distress to more helpful responses.

OpenAI wants to dismiss a lawsuit alleging it poached xAI employees to steal trade secrets. Potential remedies to Google’s illegal ad tech monopoly. California’s rocky path to striking its landmark AI law. Elon Musk’s costly AI gamble in Memphis.

Caste bias in ChatGPT. Indonesia lifts TikTok’s suspension. India pushes homegrown apps over US alternatives. Deloitte issues a refund for an inaccurate Australian government report that used AI — and strikes a huge enterprise deal with Anthropic.

We did not know this: Social media usage apparently peaked in 2022 and has been declining since. 

Meta’s superintelligence teams are pushing employees to ditch slow internal tools in favor of third-party services like Vercel. Meta’s new Instagram Rings award program for creators. Google DeepMind unveils AI agent CodeMender. Rob Williams, who led device software and services at Amazon, is retiring. Amazon’s Q business AI tool struggles with accuracy. Cory Doctorow on the enshittification of Amazon.

Data center storage and memory needs could cause a decade-long price shock. One in four press releases are now AI-generated. Mercor debuts its AI Productivity Index. Andreessen Horowitz releases its first AI Spending Report.

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and Following feedback: casey@platformer.news. Read our ethics policy here.