The New Yorker's Bombshell: Inside OpenAI's Alleged Shift From Safety to Profits Under Sam Altman

A detailed investigation by The New Yorker has raised serious questions about whether OpenAI abandoned its safety-first mission as it scaled commercially, with internal documents and testimony from former leaders suggesting CEO Sam Altman played a central role in the shift. Published on April 6, 2026, the report cites previously undisclosed memos from former chief scientist Ilya Sutskever and over 200 pages of notes from ex-safety lead Dario Amodei, painting a picture of a company that gradually sidelined its commitments to mitigating existential risks from artificial intelligence as its commercial ambitions grew.

What Evidence Does the Investigation Present Against Sam Altman?

The New Yorker investigation draws on substantial internal documentation to support its claims. Sutskever reportedly compiled over 70 pages of material, including HR records, Slack messages, and photos taken on personal devices, which he shared with board members using disappearing messages. One of his memos began with a list of concerns about Altman, the first of which alleged a pattern of dishonesty. Amodei's notes reportedly raised similar leadership concerns, asserting that the company's problems traced directly to Altman himself.

One particularly striking allegation involves misrepresentation of safety approvals. According to the report, Altman told the board that safety features in GPT-4 had been approved by a safety panel. However, when a board member requested documentation, the board discovered that the most controversial features had not actually gone through the expected review process. The investigation also notes that Microsoft released an early ChatGPT version in India without completing a required safety review, a fact that Altman allegedly never mentioned to the board.

How Did OpenAI's Safety Mission Deteriorate Over Time?

The investigation describes a systematic dismantling of OpenAI's original safety-focused structure. The company was initially designed to be accountable to humanity rather than to shareholders, but that structure allegedly weakened as the firm transitioned to a for-profit model. Many safety-focused teams were reportedly dissolved or scaled back as commercial ambitions took precedence.

A critical example involves the superalignment team, to which the company had pledged a significant share of its computing power. According to the investigation, however, team members received only a small portion of what was promised and often relied on outdated systems. Most troublingly, the team was eventually dissolved before completing its task, suggesting that safety research was deprioritized in favor of product development and revenue generation.

  • Original Structure: OpenAI was designed to prioritize accountability to humanity over shareholder interests, but this foundational principle allegedly eroded as the company pursued commercial expansion.
  • Safety Team Dissolution: The superalignment team, tasked with ensuring AI safety, received insufficient computing resources and was eventually dissolved before completing its mission.
  • Governance Weakening: Safety-focused teams were reportedly scaled back or eliminated as the company transitioned from nonprofit to for-profit operations.
  • Review Process Bypassing: Critical safety reviews were allegedly skipped for new features, with board members discovering that controversial GPT-4 features had not undergone expected approval processes.

What Role Did Altman Play in OpenAI's Governance After His Removal?

The investigation also details events following Altman's temporary removal from his position. After he was ousted, his allies allegedly worked to influence internal opinion, while investors attempted to shape funding decisions that could hinge on his return. Most significantly, Altman was reportedly involved in discussions to reshape OpenAI's board, including proposals for who should oversee an independent review into his own conduct. This raises questions about the independence and credibility of any internal investigation into his leadership.

What Are the Broader Implications for OpenAI's Future Direction?

The investigation claims that OpenAI is moving aggressively toward commercial expansion, including discussions about a possible public listing at a very high valuation. The company is also reportedly growing its investment in government-related projects, including defense technologies and surveillance initiatives. This expansion into sensitive areas, combined with the alleged weakening of safety oversight, raises concerns about whether adequate safeguards remain in place as the technology becomes more powerful and more widely deployed.

Several accounts in the report describe Altman as persuasive in personal interactions but raise concerns about transparency in his decision-making. This characterization suggests that while Altman may be effective at building support for his vision, the mechanisms for accountability and oversight may have been compromised in the process.

The New Yorker investigation is one of the most detailed examinations to date of OpenAI's internal operations and leadership decisions. Whether the company's board and investors will act on its findings remains to be seen, but the report makes clear that questions about the balance between safety and profit at one of the world's most influential AI companies are far from settled.