Mira Murati brought the battle over OpenAI’s leadership and safety culture into sharp focus when she told the court she could not trust Sam Altman’s words.

In a video deposition shown Wednesday in the Musk v. Altman trial, OpenAI's former chief technology officer said Altman misled her about the safety standards tied to a new AI model. According to reports describing the testimony, Murati said Altman told her that OpenAI's legal department had determined the model did not trigger a particular internal review threshold. Her account places an internal safety dispute at the center of a case that already carries major consequences for one of the world's most influential AI companies.

Murati’s testimony shifts the argument from corporate strategy to something more basic: whether senior leaders told the truth about AI safety decisions.

The testimony matters because Murati sat near the top of OpenAI's product and research operation. When a former CTO says she could not rely on the CEO's statements, the issue extends beyond a single disagreement. It raises deeper questions about how safety judgments were made, who challenged them, and whether internal checks held up when commercial pressure intensified. Reports indicate the deposition gave the court a rare look at how OpenAI's top executives handled risk behind closed doors.

Key Facts

  • Mira Murati testified under oath in a video deposition shown Wednesday.
  • She said she could not trust Sam Altman’s statements about AI safety standards.
  • Her testimony came during the ongoing Musk v. Altman trial.
  • Reports say the dispute involved whether a new AI model met a threshold for internal review.

The courtroom clash also lands at a moment when AI companies face growing scrutiny over transparency, governance, and product safety. OpenAI helped define the current AI boom, so claims about internal misrepresentation carry weight far beyond one legal fight. Rivals, regulators, and customers all want to know whether safety standards operated as firm guardrails or flexible talking points. Murati’s account does not settle that question on its own, but it gives critics a concrete episode to examine.

What comes next depends on how the court weighs the testimony alongside the rest of the evidence and how OpenAI answers the broader concerns it revives. The trial could shape public understanding of who controlled key decisions inside the company and how seriously leadership treated safety warnings. For an industry racing to release ever more powerful systems, that matters now because trust in AI will depend not just on what models can do, but on whether the people running them tell the truth about the risks.