On the surface, the OpenAI team that showed up to work today looks almost identical to the one that showed up on November 17, the morning that CEO Sam Altman was suddenly and shockingly fired, roiling the tech world and leaving the company’s future in question. Ten days later, Altman is back, along with company president Greg Brockman and the hundreds of employees who threatened to resign if OpenAI’s board would not reverse its decision. Viewed from a sufficient distance, it appears odd that the net result of Silicon Valley’s most engrossing drama in recent memory was a barely modified status quo.
The conventional wisdom now holds that OpenAI’s nonprofit board overplayed its hand, communicated its decision and motives terribly, and disqualified itself from governing the most important company of its generation. And I basically agree with all that, as I wrote here last week. Whatever reasons the board may have had for declining to outline with any specificity why it fired the CEO who had led the organization to great success, in the end its silence doomed it.
And yet: for everything they failed at, the board’s members do appear to have succeeded in putting a new governance structure into place. Altman and Brockman are no longer on the board; Quora CEO Adam D’Angelo, who voted to fire Altman, remains, and will help to fill out the rest of its membership. Bret Taylor, the former Twitter board chair, and Larry Summers, the former US Treasury secretary, will join D’Angelo on the board; together, they will appoint up to six new members. In addition, the new board will commission an independent investigation into the events surrounding Altman’s firing.
Corporate investigations vary widely in quality and rigor, and it remains to be seen whom the board will ask to conduct this one. There’s no guarantee that the results of the investigation will be made public, though there’s a case to be made that they should be, at least in part. And whatever the investigation finds, it’s not clear at this point what it would have to find to justify Altman’s removal — especially to the company’s employees, whose near-unanimous support for their CEO is all but unheard of in Silicon Valley.
Still: it feels too early to declare the story over. The soon-to-be-former members of OpenAI’s board will speak eventually: to investigators, the public, or both. Squads of investigative reporters are now digging into Altman’s sprawling web of investments.
It is quite possible that a few years from now, the events of this month will be a footnote in the history of OpenAI’s ascent. But with a new board arriving and an investigation about to begin, I wouldn’t be surprised if this story had another twist or two in store.
I spent the past several days chatting with sources close to OpenAI and its board. Here are a few items I can report that may bring some additional texture to the past week’s events.
Once the board determined that it had a majority of members willing to fire Altman, it felt pressure to act quickly. It appears to have anticipated, correctly, that Altman would muster enormous resources to prevent his removal from the company — and so it sought to move before he got wind of the decision.
But in the board’s haste to fire him, it failed to plan for everything that would follow: starting with employees’ utter incredulity at what was happening, and the minimal explanations that would be offered to support the board’s decision. Moving quickly may have meant that the board was able to fire Altman before he could stop it, but it also created the conditions for his return.
Still, Altman came away with a worse deal on Tuesday than the one his team initially demanded. I’m told that Altman’s team originally called for him to be reinstated and every current member of the board to resign. By holding out for a few days, the board did get some concessions: D’Angelo remaining on the board; the outgoing board members getting input on members of the new board; and the independent investigation. It’s not nothing.
Bret Taylor had been considered for OpenAI’s board before. OpenAI had been seeking new board members ever since three people stepped down from it earlier this year; one mystery has been who was under consideration. I can report that one person the board had talked to was Taylor, a highly regarded entrepreneur and board operator who also served a stint as co-CEO of Salesforce. But the board had been unable to come to a consensus on any new members before the firing, I’m told.
The board never received any formal communication about Q*. One of the more intriguing stories about the drama to come out over the past few days concerns Q* (pronounced “Q-star”), an AI model that can solve basic math problems. The Information (which has really done outstanding work on the whole OpenAI story) reported that Q* “raised concerns among some staff that the company didn’t have proper safeguards in place to commercialize such advanced AI models.”
That story followed a report from Reuters that said “several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity.” I can report that the board never received any such letter about Q*.
The board never received the letter that Elon Musk posted, either. Last week a letter purporting to be from OpenAI staffers briefly appeared on GitHub. Like the board’s message in firing Altman, it was notably short on specifics. “Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence,” the letter read.
Musk posted a link to the letter on X, along with the comment “These seem like concerns worth investigating.”
In any case, I’m told, no such letter was ever received by the OpenAI board.
How Will AI Affect the 2024 Election?
Tuesday, November 28, 6–7 p.m. ET
2024 will bring the first presidential election of the generative AI era. As artificial intelligence produces output that is increasingly difficult to distinguish from human-created content, how will voters separate fact from fiction? The Brennan Center for Justice and Georgetown University’s Center for Security and Emerging Technology are convening experts to examine these and other critical questions about how AI might impact election security, voter suppression, election administration, and political advertising and fundraising.
Join the Brennan Center and CSET for this live virtual panel, which will explore what steps the government, the private sector, and nonprofits should take to minimize the possible dangers while harnessing the benefits of these new and powerful tools.
- A district judge ruled that the FTC can continue to revise Meta’s 2020 privacy settlement, blocking a move from Meta to force the issue into court. (Leah Nylen / Bloomberg)
- A newly unredacted lawsuit by 33 state attorneys general alleges that Meta knew of millions of Instagram users under the age of 13, but only disabled a fraction of those accounts and continued to collect data on children. (Natasha Singer / The New York Times)
- A test of Instagram’s Reels algorithm found that accounts following children were served a mix of videos including risqué footage of children and overtly sexual adult videos. (Jeff Horwitz and Katherine Blunt / The Wall Street Journal)
- A group of 18 countries, including the US and UK, has signed an international agreement on how to keep AI safe from bad actors, urging companies to make systems “safe by design”. (Raphael Satter and Diane Bartz / Reuters)
- Amazon was issued a fine of just $7,000 — the maximum penalty — after an employee died in a distribution center in Indiana, underscoring the need for stronger penalties and safety regulations. (Caroline O’Donovan / Washington Post)
- Amazon is reportedly set to get unconditional approval from the European Commission for its $1.4 billion acquisition of robot vacuum maker iRobot. (Foo Yun Chee / Reuters)
- The UK chancellor’s Autumn Statement announced that the government will boost spending on computing power for AI development and on UK-based quantum computers aimed at running operations without errors. (Clive Cookson / Financial Times)
- An analysis found that ads from 86 major advertisers on X were on viral posts spreading misinformation about the Israel-Hamas conflict, with both X and creators sharing in ad revenue. (Jack Brewster, Coalter Palmer and Nikita Vashisth / NewsGuard)
- X could lose up to $75 million in advertising revenue as more advertisers pause their campaigns on the platform after Elon Musk endorsed an antisemitic post. (Ryan Mac and Kate Conger / The New York Times)
- Israel told Musk that his Starlink satellite network will only operate in Gaza if it gets approval from the Israeli Ministry of Communications. Musk visited the country in a goodwill gesture after he was blasted for supporting antisemitic comments on X. (Chloe Cornish / Financial Times)
- Some parents are being mislabeled as child abusers by Google’s AI-powered systems that review YouTube content, leading to unnecessary investigations and financial burdens. (Kashmir Hill / The New York Times)
- A slew of lawsuits against AI companies over copyright infringement, all led by an unlikely new attorney named Matthew Butterick, could shape the future of AI and creative industries. (Kate Knibbs / WIRED)
- The emergence of uncensored AI models and chatbots is sparking questions about balancing safety and freedom in AI. (Mark Gimein / The Atlantic)
- Russia’s interior ministry has reportedly added Meta spokesperson Andy Stone to its wanted list. A reason was not indicated. (Laura Kayali / POLITICO)
- After Chinese social media companies like Weibo introduced a new rule that required influencers to display their legal names on their profiles, influencers are removing followers or quitting altogether. (Caiwei Chen / Rest of World)
- Reddit is reportedly in talks for a potential initial public offering as soon as the first quarter of 2024. (Amy Or, Ryan Gould, Katie Roof and Gillian Tan / Bloomberg)
- Content moderators for dating apps say they face high workloads, unrealistic targets, and a lack of mental health support, putting their well-being and user safety at risk. (Niamh McIntyre / The Bureau of Investigative Journalism)
- ByteDance is reportedly planning on cutting hundreds of jobs in its gaming department and winding down Nuverse, withdrawing from the gaming industry. (Zheping Huang and Dong Cao / Bloomberg)
- TikTok’s research on the platform’s impact on the music industry found that users are more willing to pay for a streaming service or music product than the average consumer. (Stuart Dredge / Music Ally)
- X preview cards will get headlines again, Musk says, after removing them last month in a bid to discourage people from leaving the site. (Ivan Mehta / TechCrunch)
- Google Bard is now better at understanding YouTube videos, including the ability to ask Bard specific questions related to video content. (C. Scott Brown / Android Authority)
- Google Meet users can now “raise hand” in a meeting by physically raising their hands instead of clicking a button. (Abner Li / 9to5Google)
- Google said that a missing file problem in Drive was caused by the desktop app, and is investigating the problem. (Ben Schoon / 9to5Google)
- Instagram is now letting all users download publicly posted Reels. (Andrew Hutchinson / Social Media Today)
- A look at the effective altruism movement, which set out to encourage better outcomes in philanthropy but whose adherents have struggled in high-profile roles at FTX and OpenAI. (Robert McMillan and Deepa Seetharaman / The Wall Street Journal)
- Startup Inflection AI unveiled a new AI model that it says outperforms Google’s PaLM 2 Large and Meta’s LLaMA 2, and will soon be integrated into its chatbot Pi. (Alex Konrad / Forbes)
- China’s Douyin has gained popularity among older generations, and has become an outlet of self-expression and connectivity for the elderly. (Lavender Au / WIRED)
Those good posts