Mira Murati brought the OpenAI power struggle into sharper focus when she told the court she could not trust Sam Altman’s words about AI safety.

In a video deposition shown Wednesday in the Musk v. Altman trial, Murati, OpenAI's former chief technology officer, said Altman lied to her about the safety standards tied to a new AI model, according to reports. She testified under oath that Altman told her OpenAI's legal department had determined the model did not trigger a key internal review threshold, a claim she said was false. Her account strikes at a central issue in the case: whether OpenAI's leadership handled powerful AI systems with the caution it promised.

Murati’s testimony turns an internal trust dispute into courtroom evidence about how OpenAI described the risks of a new model.

The significance goes beyond one disputed conversation. Murati stood near the center of OpenAI's technical decision-making, and her testimony suggests deeper tension between product ambition and safety oversight. Reports indicate the deposition adds fresh weight to arguments that top executives may have mischaracterized internal safety judgments in ways that shaped deployment decisions and staff confidence.

Key Facts

  • Mira Murati testified under oath in a video deposition shown Wednesday.
  • She said she could not trust Sam Altman’s statements about AI safety standards.
  • Murati said Altman falsely told her that OpenAI's legal team had determined a new model did not trigger a key internal review threshold.
  • The testimony surfaced during the ongoing Musk v. Altman trial.

The courtroom clash also lands at a moment when public trust in AI companies depends on more than technical progress. Investors, regulators, and users want proof that internal safeguards mean something when deadlines tighten and competition accelerates. Murati’s account, if it holds up under scrutiny, could deepen questions about who inside major AI labs makes final calls on risk and how those decisions get explained.

What comes next matters well beyond this trial. The case will keep testing OpenAI’s internal governance and the credibility of its leaders, while Murati’s testimony may shape how courts, regulators, and employees judge safety claims in the AI industry. For a sector racing to build ever more capable systems, the dispute points to a harder truth: trust inside the lab can become the story outside it.