Inside the Musk Trial: How OpenAI's Board Tried to Save the Company by Ousting Sam Altman
During testimony in Elon Musk's lawsuit against OpenAI, a former top scientist revealed that the board's shocking decision to remove Sam Altman in November 2023 was a last-ditch effort to keep the company from abandoning its safety mission. The five-day ouster, which ended with Altman's reinstatement, has become a central flashpoint in the legal battle over whether OpenAI betrayed its founding promise to develop artificial intelligence safely and for the benefit of humanity.
What Really Happened During OpenAI's "Blip"?
In November 2023, OpenAI's board removed Altman, citing a breakdown of trust and concerns that he had not been "consistently candid in his communications." The dramatic five-day period, which employees nicknamed "The Blip," ended when Altman was reinstated after Microsoft CEO Satya Nadella publicly offered to hire him and other OpenAI executives to lead a new AI team at Microsoft.
Ilya Sutskever, the OpenAI co-founder and former chief scientist who orchestrated the ouster before departing the company in 2024, testified that the decision was driven by genuine safety concerns. Sutskever had compiled a detailed record documenting what he characterized as Altman's "consistent pattern of lying," including misrepresenting facts, safety protocols, and company information to the board and executives.
"I felt a great deal of ownership of OpenAI. I felt like I created this company. I simply cared for it, and I didn't want it to be destroyed," Sutskever stated.
Sutskever described the removal as a "Hail Mary" to rescue OpenAI from an environment that had become "not conducive" to the technology's safety. He explained that he had led a team focused on long-term AI risks, developing what the company called "superalignment" research to ensure that increasingly powerful AI systems could be controlled safely.
How Did Microsoft's Massive Investment Shape OpenAI's Direction?
At the heart of Musk's lawsuit is the claim that OpenAI's leaders breached their duty to the nonprofit's mission by building a for-profit company on top of it. Microsoft, which has invested $13 billion in OpenAI since 2019, plays a central role in this dispute. During testimony, Musk's attorney Steven Molo questioned Microsoft CEO Satya Nadella about the company's financial motivations and its influence over OpenAI's decisions.
Nadella acknowledged that Microsoft expected a return of approximately $92 billion on its $13 billion investment "if it works out." When pressed about his fiduciary duty to maximize profit, Nadella pointed to text messages between himself and Altman that appeared to show him pushing for an earlier rollout of ChatGPT's paid version. In one exchange, Nadella wrote "When chatGPT paid?" and followed up with "The sooner the better" when Altman explained there wasn't enough computing power for a good consumer experience.
Nadella defended Microsoft's investment by arguing that a larger overall market would benefit the nonprofit as well. However, Molo highlighted a troubling detail: for a period of time, OpenAI's nonprofit arm had no employees at all, raising questions about whether the nonprofit structure was merely a shell.
Understanding the Nonprofit vs. For-Profit Tension
- The Original Mission: OpenAI was founded as a nonprofit with the stated goal of developing artificial intelligence safely and for the benefit of humanity, not to maximize shareholder returns.
- The For-Profit Layer: OpenAI created a for-profit subsidiary to attract investment and commercialize its technology, with Microsoft becoming the largest financial backer at $13 billion since 2019.
- The Governance Problem: Musk's lawsuit argues that the for-profit operations have completely overshadowed the nonprofit mission, with the nonprofit at times having zero employees while the for-profit company pursued aggressive commercialization strategies.
During his testimony, Nadella explained that during "The Blip," he was primarily focused on ensuring continuity for Microsoft's customers rather than understanding why Altman had been removed. "It goes back to me wanting to communicate to customers that they can count on us," he said. "Come Monday, that doesn't just disappear." Molo pressed him on whether the board should have issued a public statement explaining the firing, but Nadella maintained his focus was on customer reassurance.
What Happened to OpenAI's Safety Team After Altman's Reinstatement?
One of the most significant revelations from Sutskever's testimony involves what happened to OpenAI's safety research after Altman returned. Sutskever had led a team dedicated to "superalignment," which focused on ensuring that increasingly powerful AI systems remain aligned with human values and can be controlled safely. The team was disbanded just days after Sutskever departed the company in May 2024.
Sutskever explained the mission of this work: "The goal of the super alignment is to do the research in advance, such that humanity will have the technological means to make it controlled and safe." The dissolution of this team shortly after his departure suggests that the safety concerns that motivated Altman's removal may not have been adequately addressed.
What Does This Mean for OpenAI's Future?
The trial testimony reveals fundamental tensions within OpenAI that extend beyond the legal dispute with Musk. The company faces questions about whether its nonprofit structure is meaningful or merely ceremonial, whether its leadership prioritizes safety over commercialization, and whether Microsoft's massive investment has effectively taken control of the company's strategic direction.
Meanwhile, OpenAI is navigating other significant business developments. The company and Microsoft recently renegotiated their contract, capping total revenue-sharing payments at $38 billion. This move allows OpenAI to pursue new partnerships with companies like Amazon and Google, potentially strengthening the company's negotiating position as it works toward a public offering that some executives believe could happen by the end of 2026.
The trial continues to expose the complex relationship between OpenAI's founding mission and its current business model, with Sutskever's testimony providing a rare inside look at the safety concerns that drove one of the most dramatic boardroom confrontations in recent tech history.