Five ways of thinking about OpenAI's new browser
Hitting close to Chrome

Programming notes: This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here. Also, Platformer will be off Thursday to take our first reporting day, as discussed in this post.
Today, let's talk about Atlas — OpenAI's long-rumored, ChatGPT-powered web browser, which is now available for macOS.
I'm fond of saying that the worst day to review new large language models is the day they are released: it's simply too hard to put them through their paces in a few hours, and at their current level of sophistication it can take days or weeks to fully understand how they differ from their rivals and predecessors.
It turns out that an AI-powered browser is much the same. Like Perplexity's browser Comet, or the Browser Company's competitor Dia, Atlas is built on Chromium — the open-source web browser developed by Google that also serves as the basis for Chrome. As such, Atlas will be immediately familiar to any user of Chrome: on setup, you can import your bookmarks and key settings directly from your old browser, and navigate the new browser almost exactly as you did the old one.
The familiarity has pros and cons. On the plus side, Atlas is easy to use, and doesn't require learning a new workflow or giving up your favorite old Chrome extensions. (You can install those from the Chrome Web Store, just as if you were using Chrome itself.)
At the same time, Atlas can look so familiar that it's not entirely clear why anyone would switch away from Chrome. Before its miraculous $610 million sale to Atlassian, the Browser Company spent years flailing in search of a feature set compelling enough to get people to leave Chrome for its Arc browser. But even after years of reshuffling user interface elements, and countless Browser Company blog posts and YouTube videos pondering the future of the browser, Arc barely surpassed 1 million App Store downloads.
With Atlas, OpenAI is making a narrower set of promises. If you find yourself increasingly using ChatGPT more than Google, you may prefer Atlas to Chrome. Where Chrome is built to capture, monetize, and improve Google search, Atlas is built to do the same for ChatGPT.
For the moment, and for most people, it still may not feel worth the effort. People already have myriad ways to use ChatGPT in and out of the browser: on the web, in the mobile app, in the desktop app, in voice mode, in third-party browser extensions, and so on.
But one lesson Google learned early on is that the more it could get people to use search, the more powerful it became. Searches generate data that can be used to improve search indices and large language models; they create surfaces to insert advertising; and they become a hook to tie people up in Google's ecosystem. Increasing search volume creates a flywheel effect that makes it harder for competitors to catch up: the company that has the most data can often provide the best search results. (This is one reason why one approved remedy in the Google antitrust trial is for the company to share data about its search index with competitors.)
With Atlas, OpenAI is simply applying that same logic to the AI era. ChatGPT already attracts 800 million weekly users, many of whom are already using it as a replacement for traditional search. But many of those people are entering their queries into ChatGPT through Chrome, which over time will seek to do more and more of what ChatGPT does today. Atlas is an effort to take away market share from Google before that happens.
I've only spent a few hours with Atlas so far, so for the most part I'll reserve my judgments until I've used it more. I've also spent the afternoon reading commentary on the new browser, and here are five ways to think about Atlas as you consider whether to try it.
It’s a commentary on how slowly Chrome has evolved. Chrome has added AI features in the past year — but can you name them? If you can name them — can you find them? The browser's "Gemini in Chrome" feature, which lets you chat with tabs in much the same way as you can in Atlas and other AI browsers, feels tentative and tacked on compared to its rivals.
Atlas' big move is to surround every page with ChatGPT: your past chats and various tools in the left sidebar, an open chat in the right sidebar, and an agent that will take over the browser for you and attempt to get things done. (It will do so with excruciating slowness, and probably not to your specifications. But if you want Atlas to shop on Instacart for you, it will try.)
Other novel features include "cursor mode," which opens a ChatGPT window over highlighted text, allowing you to request that the browser transform it into something shorter, longer, in Spanish, and so on. A memory feature lets you ask it about tabs you browsed in the past; Atlas will resurface them upon request.
It will take at least a few weeks to see whether these features live up to the billing. (And in the case of agent mode, it will likely take a few more model upgrades at least.) But if nothing else, there is at least a theory here of why someone should use Atlas. It's "making ChatGPT your whole personality," but as a browser.
It's a distribution play. So says analyst Benedict Evans, adding that Atlas is also a data collection play. Until now, OpenAI has focused on making ChatGPT available wherever you might already be: the web, on your phone, and so on. With Atlas, OpenAI aspires to make ChatGPT a proper destination of its own. The browser is one of, if not the most-used apps on any computer; if OpenAI can own the browser it can better control its own destiny.
It’s a security nightmare. For months now, blogger and developer Simon Willison has been sounding the alarm about the risk that AI agents will suffer prompt injection attacks: malicious inputs that trick agents into harming you. By embedding invisible instructions in web pages instructing agents to steal your data or taking unwanted actions on your behalf, prompt injections can cause a lot of harm. And as Willison has diligently documented, there is currently no foolproof method for preventing them.
For that reason, Willison's enthusiasm for Atlas is muted. He writes:
The security and privacy risks involved here still feel insurmountably high to me — I certainly won't be trusting any of these products until a bunch of security researchers have given them a very thorough beating.
I'd like to see a deep explanation of the steps Atlas takes to avoid prompt injection attacks. Right now it looks like the main defense is expecting the user to carefully watch what agent mode is doing at all times!
In fairness, that's also the main defense Anthropic asks of users of its Claude for Chrome browser agent. While AI companies say they have trained their systems to be on high alert for prompt injections, serious risks remain.
It’s mostly just Chrome. OpenAI had an opportunity to do something visually striking or conceptually novel with Atlas. But the first version of the product is surprisingly bare-bones: a stripped-down version of Chrome that has seen Google services ripped out in favor of OpenAI's. You can understand why this would appeal to OpenAI employees, who spend all day dogfooding their own product. But Atlas still struggles to answer the basic question of why you wouldn't just continue using ChatGPT in the app or in your browser.
It’s too early to guess whether people really want this. While some commentators have suggested that the browser wars are back, it strikes me that none of the AI browsers to date have really made a dent in Chrome. Unless you do a lot of research in your browser, "chat with your tabs" can feel like a solution in search of a problem. AI labs appear to be under the impression that all anyone does in a browser is book vacations and order groceries, and that what they want is an agent who can do that worse and more slowly than the user could do it themselves.
At the same time, somewhere in here there is probably a good product to be built. An agent that anticipates your needs online, aids you in your tasks, and doesn't accidentally give away all your banking information in the process could well be more useful than Chrome is today.
Atlas represents a half-step in that general direction. But getting to the finish line will require some significant breakthroughs.

Sponsored
Cut Code Review Time & Bugs in Half

Code reviews are critical but time-consuming. CodeRabbit acts as your AI co-pilot, providing instant Code review comments and potential impacts of every pull request.
Beyond just flagging issues, CodeRabbit provides one-click fix suggestions and lets you define custom code quality rules using AST Grep patterns, catching subtle issues that traditional static analysis tools might miss.
CodeRabbit has so far reviewed more than 10 million PRs, installed on 1 million repositories, and used by 70 thousand Open-source projects. CodeRabbit is free for all open-source repos.

Following
Amazon’s robot plan
What happened: Amazon, the second largest employer in the US, plans to use robots to eliminate the need to hire 600,000 people, according to internal documents viewed by the New York Times. While sales are expected to double by 2033, executives have told Amazon’s board they’re hopeful robots will let them avoid hiring more people in the US in the coming years.
The ultimate goal, according to Amazon’s robotics team, is to automate 75 percent of the company’s operations. In the near term, the robotics team expects the company to be able to avoid hiring more than 160,000 people it would otherwise need by 2027 — shaving 30 cents off the cost of each item the company picks, packs and delivers.
(Amazon says the documents reflected the viewpoint of one group inside the company and said the company planned to hire 250,000 people for the coming holiday season, declining to say how many of those roles would be permanent.)
The robots already deployed in warehouses—named Sparrow, Cardinal, and Proteus, among others — work in a new system named Sequoia. Sparrow the robotic arm picks out packages; Cardinal the beefier robotic arm grabs boxes and stacks them; and Proteus, a tortoise-looking robot, carries the carts to shopping docks.
Why we’re following: Amazon’s internal documents reveal one of the first known plans to eliminate a massive number of jobs through robotics. Other companies have also invested heavily in robotics — OpenAI has hired a slew of researchers to work on humanoid systems, and Meta CTO Andrew Bosworth recently said humanoid robots are Meta's next “AR-size bet.”
But most progress so far has been theoretical. Amazon's plans are so far along that the company is already gaming out a strategy to counter the expected backlash to the loss of jobs, including participating in community events like Toys for Tots. (Amazon said its community involvement is not related to automation.)
What people are saying: “Nobody else has the same incentive as Amazon to find the way to automate,” Daron Acemoglu, a professor at MIT and a Nobel Prize winner in economic science, told the Times. If the plan works out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” he said.
The Amazon Labor Union, in an X post, accused the company of “using our data and labor to design machines to replace us.”
Author Chuck Wendig questioned how the move would add overall value to the economy in a post on Bluesky: “Step One: Fire the human workers and help degrade the total economy with both robotics and AI so now millions of out-of-work Americans don’t have money to spend buying shit on Amazon and also robots cost money but don’t spend money. Step Two: ??? Step Three: Profit!”
—Lindsey Choo
Attacks against Anthropic
What happened: Anthropic is pushing back on claims that the company is out of step with the Trump Administration's AI plans. On Tuesday the company published a blog post from CEO Dario Amodei “on Anthropic's commitment to American AI leadership,”following recent accusations from Republicans that they “have an agenda to backdoor woke AI.”
Amodei said Anthropic is in “alignment with the Trump administration on key areas of AI policy,” noting its support of Trump’s AI Action Plan, its contract with the Department of War, and various meetings Amodei and other Anthropic leaders have held with the Trump administration.
The post came several days after White House AI Czar and venture capitalist David Sacks claimedthat Anthropic is responsible for a “state regulatory frenzy” that was “damaging the startup ecosystem.” The criticism came after Anthropic endorsed California's SB 53, an AI regulation bill, in September.
Amodei says the bill doesn't affect startups because it exempts all companies with revenue under $500 million. He added that startups “are among our most important customers,” which means that “damaging that ecosystem makes no sense for us.”
Amodei also said studies of model bias show that Anthropic's are “less politically biased than models from most of the other major providers.” The company is “making rapid progress towards our goal of political neutrality,” he said.
Why we’re following: In our hyper-polarized times, most big tech companies have decided to show fealty to the Trump Administration in one way or another. It's telling that Anthropic, which has long prioritized AI safety above almost everything else, feels the need to ally itself with an administration that has largely sneered at existential risk. The company will argue that it needs to build bridges in order to have a seat at the table, and it may well be right. On the other hand, most platforms that crawl into Trump's pocket soon find that making small compromises only leads to demands to make larger ones.
What people are saying: It all started when Sacks tweeted that Anthropic is running a “regulatory capture strategy based on fear-mongering," in response to a piece Anthropic head of policy Jack Clark posted on Substack on how he is “deeply afraid” of the risks advanced AI could pose to humans.
Clark told Bloomberg he found Sacks’ comments “perplexing.” “In many areas we’re extremely lined up with the admin,” he said. “There are some areas where we have a slightly different view, and we articulate that view in a substantive, fact-forward way.”
The same day, Consumers’ Research, a conservative organization that leads boycott campaigns against “woke” companies, released an article headlined “Meet Anthropic: The Wokest AI Company.”
Since then, popular conservative influencers have shared many similar X posts linking to the article. Talk show host Joe Pags shared a post saying that Claude models “showed a 100% liberal bias” and Anthropic leaders “poured” millions of dollars into “Democrat causes.” The post disclosed a “Partnership with Consumers' Research.” Conservative influencers Ryann McEnany, Arynne Wexler, and Olivia Krolcsyk posted links to the article discussing the same two talking points.
OpenAI investor Reid Hoffman defended the company on X, saying “Anthropic is one of the good guys.” He framed Anthropic as part of a cohort of companies who care sufficiently about the impact of their tech: “Anthropic, along with some others (incl Microsoft, Google, and OpenAI) are trying to deploy AI the right way, thoughtfully, safely, and enormously beneficial for society.”
—Ella Markianos

Side Quests
The Trump administration clarified that the $100,000 H1-B visa fee will only apply to new visa applicants outside the US. Federal agencies became the most-blocked accounts on Bluesky. A look at President Trump’s growing embrace of AI-generated propaganda.
Mark Zuckerberg, Instagram head Adam Mosseri and Snap CEO Evan Spiegel must testify at an online child safety trial, a judge ruled.
Apple accused the EU’s DMA of imposing “hugely onerous and intrusive burdens.” Chile is facing a political debate around whether to embrace AI for economic reasons or reject it on environmental grounds. Filipino “chatters” engaging users on behalf of OnlyFans creators say they are struggling with mental health and burnout. Meanwhile, OnlyFans CEO Keily Blair said it has paid creators $25 billion since 2016.
How Sam Altman’s dealmaking ties Silicon Valley giants to the fate of OpenAI. Sora is allowing users to make fetish content with other people’s faces. OpenAI hired more than 100 ex-bankers for a project codenamed Mercury. Airbnb CEO (and Altman ally) Brian Chesky says ChatGPT isn't ready for an Airbnb integration.
YouTube rolled out a "likeness detection" program for creators, promising to prevent unwanted deepfakes of platform creators. (Good!) A review of the Oakley Meta Vanguard smart glasses mostly approves of them. Meta AI’s app installs and daily active users jumped after its introduction of the Vibes feed. Yelp’s AI agent can help take your reservations over the phone. How to use Perplexity AI Pro on your Samsung TV.
Also: HBO Max raised its prices for the third year in a row.

Those good posts
For more good posts every day, follow Casey’s Instagram stories.

(Link)

(Link)

(Link)

Talk to us
Send us tips, comments, questions, and Atlas feature requests: casey@platformer.news. Read our ethics policy here.