What is OpenAI going to do when the truth comes out?

Sam Altman’s deal with the Pentagon seems too good to be true. What happens when the public realizes that?


This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.

"In [Murati’s] experience, Altman had a simple playbook: first, say whatever he needed to say to get you to do what he wanted, and second, if that didn’t work, undermine you or destroy your credibility … It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior — how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. “Oh, I must have misspoken,” Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology." — Keach Hagey, The Optimist

I.

I thought of this passage from The Optimist over the weekend as I worked to make sense of a rather stunning series of events. The Pentagon followed through on its threat to terminate the military’s contract with Anthropic over the company’s refusal to amend its prior agreement to permit “all lawful use” of its technology, including mass domestic surveillance and autonomous weapons. It further threatened to designate Anthropic as a “supply chain risk,” a designation previously reserved for corporate extensions of foreign adversaries, and to block any company that contracts with the military from using Anthropic’s products.

For the briefest of moments, it appeared as if Anthropic might have an ally in the fight: on Friday morning, Hagey (in her regular perch at the Wall Street Journal) reported that Altman had sent a memo to OpenAI’s staff saying that he would draw the same “red lines” Anthropic had.

“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” he wrote, “and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

And by Friday evening, Altman announced on X that OpenAI had reached an agreement with the Pentagon for classified AI deployment — with the same red lines, he claimed, now baked into the contract.

Setting aside for a moment the government’s unhinged retaliation against Anthropic, Altman’s claim to have won concessions from the US military offered at least some reason for hope. If powerful AI systems are to be embedded in systems of state violence, the least that Americans can ask for in return are mechanisms of oversight and restraint. Altman said OpenAI had achieved just that.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said in an X post.  “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Immediately, Altman’s claim fell under scrutiny. Was it not suspicious that OpenAI claimed to have won, in just a few days of negotiating, the concessions that Anthropic had not? Was it possible that the same Pentagon officials railing on X against the idea of a private company attempting to exert control over the military were now making an exception for OpenAI?

Was the public now, like Mira Murati and Ilya Sutskever before them, caught in the familiar Altman trap that begins with him telling them what they want to hear?

II. 

Notably, in this case few seemed to extend to Altman the benefit of the doubt. The most popular post on the ChatGPT subreddit over the past week is titled “You’re now training a war machine. Let’s see proof of cancellation”; it received more than 32,000 upvotes. Similar posts in that forum and the OpenAI subreddit also received tens of thousands of upvotes; the company also came in for extended criticism on Hacker News.

And as the weekend went on, additional reporting suggested that the knee-jerk cynicism triggered by OpenAI’s deal was justified.

In The Verge, Hayden Field reported that contrary to OpenAI’s public statements — and consistent with the military’s own framing of its demands — the company’s deal with the Pentagon includes fewer restrictions than Anthropic’s had.

She writes:

One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.

OpenAI might be able to partially block the military’s efforts to conduct domestic surveillance by building classifiers and implementing other model-level safeguards, as it has said it will do. And yet it’s essential to remember that most tasks related to mass surveillance might not look that way to a model. The government can upload massive spreadsheets of data bought legally from data brokers and ask GPT models to conduct all sorts of analyses that will not identify themselves as efforts to build systems of oppression.

And in any case, we know that the Pentagon tried repeatedly to eliminate meaningful safeguards in Anthropic’s contract through innocuous-seeming word changes and a generous dusting of legalese. 

Ross Andersen described the process in The Atlantic. “The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic,” he reported on Sunday. “It would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like as appropriate — suggesting that the terms were subject to change, based on the administration’s interpretation of a given situation.”

Moreover, on the subject of autonomous weapons, Bloomberg reported last month that OpenAI is participating in a competition to develop software that will allow drones to be controlled via voice. (Anthropic participated in the competition, too — reminding us that Dario Amodei’s objection to murderbots isn’t that they are immoral, but that they don’t work very well yet.)

If you build voice controls for the murderbot but not the murderbot itself, is that consistent with OpenAI’s usage policy?

“It turns out that the usage policy can be read in a few ways,” writes Sarah Shoker, who led OpenAI’s geopolitics team for three years before leaving last June, on her Substack. “Depending on whether you believe that the use of an AI voice-to-digital tool in a kill-chain amounts to helping build a weapon, or if you believe that an AI model can be treated in isolation from its larger weapon system.”

The problem, Shoker writes, is that almost all of the relevant definitions here — again, the definitions relevant to whether and how you will be surveilled as an American, and which large language models might guide a drone swarm that someday attacks you — are up for debate.

“Policy and law are not free-floating static ‘things,’” she writes. “The borders of the law are fuzzy and filtered through political ideology. Throughout US history, policymakers have reinterpreted and exploited gaps in the law to allow for activity that independent legal observers have called straightforwardly illegal.”

She continues:

There isn’t a consensus over what it means in practice to have adequate ‘human supervision,’ ‘human in the loop’ or ‘meaningful human control’ in autonomous weapons systems. Terms that reference human oversight remain contentious around the world. Militaries are still trying to develop new testing and evaluation procedures for reducing problems like e.g. over-reliance in human-AI teams. It’s possible that Anthropic disagreed with how ‘human supervision’ (broadly speaking) would be put into practice.

A few frontier AI company employees have asked me about whether the ‘lawful purposes’ language is a sufficiently strong bulwark against misuse. The answer is always going to be it depends. You have to decide whether that’s good enough and if you trust your company leaders to respond effectively in case something goes wrong.

III.

As public opinion began to turn against OpenAI — uninstalls of ChatGPT were up nearly 300 percent over the weekend, market research firm Sensor Tower estimated — the company sought to reassure the public.

In a blog post, the company laid out what it described as a comprehensive, layered approach to ensuring its red lines are never crossed, and published what it said was the “relevant” portion of its contract with the military. And Altman and some of his colleagues at the company answered questions from people on X.

Jessica Tillipman, an expert in government contracts and professor at George Washington University Law School, analyzed the deal and the surrounding debate. For starters, she said — and contrary to howling right-wing commentators who accused Anthropic of trying to subvert the democratic process by refusing to accept the military’s demands — “contractors restrict the government’s use of their products all the time.”

It is at least possible, she writes, that the safeguards OpenAI outlined would give it meaningful leverage to restrict the use of its models for whichever forms of surveillance and drone killing it takes issue with. But there is an enormous unanswered question — what happens when OpenAI and the military disagree? 

Tillipman writes:

If a classifier blocks a particular use, the question is whether the government has a contractual right to demand its removal. OpenAI asserts that it retains “full discretion” over those systems.

This creates tension at the heart of the agreement. The contract permits use “for all lawful purposes,” subject to “operational requirements” and “well-established safety and oversight protocols.” OpenAI says it retains full discretion over the safety stack it runs in a cloud-only deployment. If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework — language that has not been made public.

The Pentagon reacted to its disagreement with Anthropic — over a contract it had once willingly signed — by announcing an effort to destroy the company. The idea that some vague contractual language and a “safety stack” will prevent Defense Secretary Pete Hegseth and his subordinates from taking a maximalist view of their rights to OpenAI’s intellectual property is either impossibly naive or outright deceptive.

In response to my questions, OpenAI pointed me to another X post from Altman, posted on Monday evening. In it, Altman said OpenAI plans to amend its contract with the Pentagon to add further restrictions on the use of its systems for surveillance, and that the National Security Agency will not be using GPT models. I’m told the Pentagon has agreed to the changes. These sound like meaningful improvements; we’ll see.

“One thing I think I did wrong: we shouldn't have rushed to get this out on Friday,” Altman added. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Indeed. But in the end I’m left asking myself what will happen in the scenario that still seems disturbingly likely — that GPT models will in fact be used as part of surveillance and drone operations. Will OpenAI put up a blog post to explain that, well, actually, that’s a lawful kind of surveillance? Do an AMA about how, despite how it may look, that autonomous drone swarm had proper human supervision? OpenAI does enough polling to understand that Americans already distrust and even openly loathe AI, even as they increasingly turn to it for work and school. How does it think Americans will feel when GPT models are powering ICE raids or causing civilian casualties in wars abroad?

The company may have tied its own hands. In the end, the truth about US military operations always seems to come out one way or another. And when it does, I suspect the “all lawful use” standard that OpenAI agreed to will have permitted a far wider range of operations than we are now being told are possible. 

The problem with telling everyone what they want to hear is that eventually reality catches up with you. The people who will live under AI-powered surveillance, and the people in the flight path of AI-assisted drone swarms — they're the ones who are going to find out what OpenAI actually agreed to do. And I suspect it will be much more than the company now expects us to believe. 

On a bonus episode of the podcast: Kevin and I compare notes on a tumultuous weekend for Anthropic, OpenAI, the Pentagon, and the country. Recorded on Saturday morning.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Following

Everyone has something to say about the Pentagon, Anthropic, and OpenAI

What happened: As the Pentagon’s “all lawful use” drama unfolded, people started quitting ChatGPT and switching to Claude. Reddit posts encouraging people to boycott ChatGPT have drawn tens of thousands of upvotes, and Anthropic’s Claude app reached No. 1 on the App Store. (Anthropic was quick on the draw, too, releasing an improved tool that helps people switch by loading context from other AI apps into Claude.)

Anthropic received strong declarations of support from tech workers, too. A coalition representing 700,000 employees across Amazon, Google, and Microsoft demanded their companies “reject the Pentagon’s advances.” And an open letter from Google and OpenAI employees asked leaders to “refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

Why we’re following: Oh god. Where to start? This week’s events will have long-lasting effects on Anthropic’s business; OpenAI’s reputation; the public’s view of AI; the future of warfare; and American citizens’ right to privacy. To say nothing of my cortisol levels.

We’re left wondering whether remaining tech stakeholders like Amazon and Google will listen to workers and the public, or negotiate new contracts with the DoD that allow their tech to be used to surveil citizens and make kill decisions.

What people are saying: Pop star Katy Perry weighed in on the situation on X with a screenshot of her signing up for a Claude Pro subscription, captioned “done.”

When it announced its new deal with the Pentagon, OpenAI voiced some support for Anthropic on X, saying “we do not think Anthropic should be designated as a supply chain risk and we’ve made our position on this clear to the Department of War.”

Discussing OpenAI’s renegotiated agreement, OpenAI researcher Aidan McLaughlin wrote, “i personally don’t think this deal was worth it.” OpenAI safety researcher Cameron Raymond replied, “idk how the dust will settle but for now i feel similarly.”

OpenAI researcher Leo Gao took issue with OpenAI’s comms about the new DoD deal. “the contract snippet from the openai dow blog post is so obviously just 'all lawful use' followed by a bunch of stuff that is not really operative except as window dressing,” he wrote.

Gao’s OpenAI colleague Boaz Barak offered a high-minded response: “I’m proud to work at a company that contains people as brilliant and conscientious as Leo and allows them to speak their mind.” He is proud of OpenAI’s culture, he added: “OpenAI has a lot of issues, but in terms of enabling employee pushback and discussion it is in fact still open, with all the messiness that this entails.” But he added a sneak diss: “Leo is an amazing researcher and person but not a lawyer or a natsec expert,” and said people looking to understand the situation should follow OpenAI’s head of national security partnerships, Katrina Mulligan.

In a cameo on the same thread that had me reeling, former U.S. Congressman Brad Carson responded to Barak. “I'm former general counsel of Army, former Undersecretary of Army, former Undersec of Defense. Not sure if that makes me a nat sec 'expert.' But,” he wrote, Gao’s interpretation of the OpenAI contract “is the right one, IMO.”

In a later exchange, Gao wrote about OpenAI’s culture, “I do notice that the vast majority of people with views similar to me has left openai over time. I also think a lot of people are scared of speaking their mind.” But, he said, “it could also be a lot worse, and I think it's worth being grateful for what we do have.” Um. At the very least, I’m grateful that as of a year ago, OpenAI no longer requires its employees to sign highly restrictive exit NDAs.

The episode also opened debate over what control tech companies should have over their government contracts. Stratechery writer Ben Thompson wrote that this should all be up to the government’s discretion: “what is the standard by which it should be decided what is allowed and not allowed if not laws, which are passed by an elected Congress?” He continued, “Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public.”

Dean Ball, formerly an AI advisor in the Trump White House, wrote in a must-read post on Substack that the Pentagon’s retaliation against Anthropic “strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property.” Pete Hegseth, Ball argued, “announced his intention to commit corporate murder” because a private company attempted to set its own terms for a contract. Essentially, it sent the message: “do business on our terms, or we will end your business.”

This week’s events were an ominous sign for the future of AI governance, Ball wrote. “The Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be,” he said. And they were an awful one. “Our public institutions behaved erratically, maliciously, and without strategic clarity.”

When all is said and done, we’re left thinking about this generationally significant Onion headline.

Ella Markianos


Prediction markets are bad

What happened: Insiders keep trading on their inside information in ways that continually make us ask: how is this legal?

OpenAI fired an employee after finding out the employee used confidential company information for their personal gain on prediction markets including Polymarket, OpenAI CEO of applications Fidji Simo told employees.

This follows two other recent instances of insider trading on Kalshi, including a case in which former California gubernatorial candidate Kyle Langford traded on his own candidacy, and another in which an editor for YouTuber MrBeast bet on markets related to MrBeast videos. Kalshi said it discovered the MrBeast case after its monitoring systems flagged “near-perfect trading success on markets with low odds.”

The insider trading accusations also come amid a period of backlash for Kalshi following its decision to void some bets on the ouster of Iranian Supreme Leader Ali Khamenei.

Kalshi CEO Tarek Mansour said the platform doesn’t “list markets directly tied to death” to prevent people from profiting from death.

Why we’re following: Prediction markets have increasingly become the go-to for prediction data related to elections and markets. A number of news publishers have announced partnerships with prediction markets — most recently, the Associated Press announced it’s teaming up with Kalshi to make its US election results available on the platform ahead of the 2026 midterms.

As trading volume on prediction markets grows exponentially, it’s troubling to consider how many people could profit from anything from election results to war, especially with hidden advantages and without clear rules.

What people are saying: "I see you've placed your bet on Red. unfortunately in this casino, we call that color Bleen. you get $0. we'll keep your money. thanks for playing!” computer scientist Ben Anderson quipped about the Khamenei decision.

“Welcome to 2026 where a main talking point around war is whether prediction markets should include targeted assassinations as ‘being out as leader’,” @mert wrote on X.

@jellymanguy highlighted Kalshi’s hypocrisy on its policies related to death, pointing to the bets it settled that former president Jimmy Carter would not attend President Trump’s inauguration, knowing that people were betting the then 100-year-old Carter would die before the event: “this has nothing to do with death, this has everything to do with your bottom line.”

On insider trading, “I imagine this is gonna play out like the ufc’s approach to drug testing: drag a couple idiots who were too obvious about it into the public square every once in a while and look the other way the rest of the time,” wrote Nathan Grayson, cofounder of news site Aftermath.

—Lindsey Choo

Side Quests

The DoD was in talks with leading AI companies about partnerships to conduct automated reconnaissance of China’s power grids, utilities, and sensitive networks. Multiple federal agencies raised concerns about Grok’s safety and reliability in recent months before the DoD approved Grok for use in classified settings.

The U.S. Supreme Court declined to hear a dispute over copyrights for AI-generated material. The case was brought by a computer scientist who was denied a copyright for AI-generated art. A federal judge issued a preliminary injunction blocking Virginia from enforcing a new law restricting children's social media use, on First Amendment grounds.

X is full of disinformation about the U.S. and Israeli attacks on Iran, including old videos presented as recent and AI-generated images. Iranians have turned to Starlink, decentralized messaging apps, and VPNs to circumvent the internet blackout, and are sharing videos of U.S. and Israeli airstrikes.

Amazon Web Services said its facilities in the Middle East were facing power and connectivity issues after unidentified “objects” struck its data center in the UAE.

OpenAI raised $110 billion at a $730 billion valuation, up from $500 billion in October. Amazon invested $50 billion, while Nvidia and SoftBank invested $30 billion each. OpenAI said ChatGPT has over 900 million weekly active users, and over 50 million consumer subscribers.

OpenAI said it would overhaul safety protocols and establish direct contact with Canadian police, after failing to alert authorities about messages the Tumbler Ridge suspect was sending to ChatGPT.

The plaintiff in Meta’s big social media addiction trial testified that her social media use, which began in childhood, exacerbated depression and suicidal thoughts. Meta filed lawsuits against four alleged scam advertising operations based in Brazil, China and Vietnam. Court documents from a New Mexico trial showed internal divisions at Meta as Instagram teen safety initiatives conflicted with growth and engagement goals.

Chinese military procurement documents show the PLA's efforts to use AI to assist in drone piloting, cyberattacks, decision-making, and disinformation campaigns.

Australia's eSafety Commissioner threatened action against app stores and search engines if AI services operating in Australia don't verify user ages by March 9.

A profile of Telegram CEO Pavel Durov, who faces an investigation in France on a dozen preliminary charges and a criminal case in Russia for “aiding terrorism.”

TikTok is back in Albania after a year-long ban expired this month. The Albanian government said TikTok added "important filters for security and language."

The rise of Claude Code is fueling productivity panic among engineers and executives. (A UC Berkeley study found people who adopt AI tools work longer hours.)

Anthropic said “a fix has been implemented” after a few hours of elevated errors on claude.ai, Claude Code, and some API methods.

Meta scrapped the most advanced AI chip it was developing after struggling with the design, switching focus to a simpler chip.

Co-founder Toby Pohlen left xAI, making him the seventh of twelve co-founders to depart. Elon Musk bashed OpenAI in a lawsuit deposition, saying “nobody committed suicide because of Grok.” X added a “Paid Partnership” label that creators can apply to their posts to indicate they’re advertisements.

Those good posts

For more good posts every day, follow Casey’s Instagram stories.

(Link)

(Link)

(Link)

Talk to us

Send us tips, comments, questions, and amended contract language: casey@platformer.news. Read our ethics policy here.