The OpenAI saga isn’t over just yet

A new board and a promised investigation could threaten Altman’s happy ending


On the surface, the OpenAI team that showed up to work today looks almost identical to the one that showed up on November 17, the morning that CEO Sam Altman was suddenly and shockingly fired, roiling the tech world and leaving the company’s future in question. Ten days later, Altman is back, along with company president Greg Brockman and the hundreds of employees who threatened to resign if OpenAI’s board would not reverse its decision. Viewed from a sufficient distance, it appears odd that the net result of Silicon Valley’s most engrossing drama in recent memory was a barely modified status quo.

The conventional wisdom now holds that OpenAI’s nonprofit board overplayed its hand, communicated its decision and motives terribly, and disqualified itself from governing the most important company of its generation. And I basically agree with all that, as I wrote here last week. Whatever reasons the board may have had for declining to outline with any specificity why they fired the CEO who had led their organization to great success, in the end their silence doomed them. 

And yet: for everything they failed at, the board does appear to have succeeded in putting a new governance structure into place. Altman and Brockman are no longer on the board; Quora CEO Adam D’Angelo, who voted to fire Altman, remains. Bret Taylor, the former Twitter board chair, and Larry Summers, the former US Treasury secretary, will join D’Angelo on the board; together, they will appoint up to six new members. In addition, the new board will commission an independent investigation into the events surrounding Altman’s firing.

Corporate investigations vary widely in quality and rigor, and it remains to be seen whom the board will ask to conduct this one. There’s no guarantee that the results of the investigation will be made public, though there’s a case to be made that they should be, at least in part. And whatever the investigation finds, it’s not clear at this point what it would have to find to justify Altman’s removal — especially to the company’s employees, whose near-unanimous support for their CEO is all but unheard of in Silicon Valley.

Still: it feels too early to declare the story over. The soon-to-be-former members of OpenAI’s board will speak eventually: to investigators, the public, or both. Squads of investigative reporters are now digging into Altman’s sprawling web of investments. 

It is quite possible that a few years from now, the events of this month will be a footnote in the history of OpenAI’s ascent. But with a new board arriving and an investigation about to begin, I wouldn’t be surprised if this story had another twist or two in store.

I spent the past several days chatting with sources close to OpenAI and its board. Here are a few items I can report that may bring some additional texture to the past week’s events.

Once the board determined that it had a majority of members willing to fire Altman, it felt pressure to act quickly. The board appears to have anticipated correctly that Altman would muster enormous resources to prevent his removal from the company, and so it sought to move before he got wind of the decision.

But in the board’s haste to fire him, it failed to plan for everything that would follow: starting with employees’ utter incredulity at what was happening, and the minimal explanations that would be offered to support the board’s decision. Moving quickly may have meant that the board was able to fire Altman before he could stop it, but it also created the conditions for his return.

Still, Altman got a worse deal on Tuesday than the one his team initially sought. I’m told that Altman’s team originally called for him to be reinstated and every current member of the board to resign. By holding out for a few days, the board did get some concessions: D’Angelo remaining on the board; the outgoing board members getting input on members of the new board; and the independent investigation. It’s not nothing.

Bret Taylor had been considered for OpenAI’s board before. OpenAI had been seeking new board members ever since three people stepped down from it earlier this year; one mystery has been who was under consideration. I can report that one person the board had talked to was Taylor, a highly regarded entrepreneur and board operator who also served a stint as co-CEO of Salesforce. But the board had been unable to come to a consensus on any new members before the firing, I’m told.

The board never received any formal communication about Q*. One of the more intriguing stories about the drama to come out over the past few days concerns Q* (pronounced “Q-star”), an AI model that can solve basic math problems. The Information (which has really done outstanding work on the whole OpenAI story) reported that Q* “raised concerns among some staff that the company didn’t have proper safeguards in place to commercialize such advanced AI models.”

That story followed a report from Reuters that said “several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity.” I can report that the board never received any such letter about Q*.

The board never received the letter that Elon Musk posted, either. Last week a letter purporting to be from OpenAI staffers briefly appeared on GitHub. Like the board’s message in firing Altman, it was notably short on specifics. “Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence,” the letter read.

Musk posted a link to the letter on X, along with the comment “These seem like concerns worth investigating.” 

In any case, I’m told, no such letter was ever received by the OpenAI board.

How Will AI Affect the 2024 Election?

Tuesday, November 28, 6–7 p.m. ET

2024 will bring the first presidential election of the generative AI era. As artificial intelligence produces output that is increasingly difficult to distinguish from human-created content, how will voters separate fact from fiction? The Brennan Center for Justice and Georgetown University’s Center for Security and Emerging Technology are convening experts to examine these and other critical questions about how AI might impact election security, voter suppression, election administration, and political advertising and fundraising.

Join the Brennan Center and CSET for this live virtual panel, which will explore what steps the government, the private sector, and nonprofits should take to minimize the possible dangers while harnessing the benefits of these new and powerful tools.



Talk to us

Send us tips, comments, questions, and posts.