In sworn testimony, Elon Musk seemed to acknowledge a practice that cuts to the heart of the AI race: xAI may have used OpenAI’s models to help train its own.

The disclosure, as reports describe it, came while Musk answered questions under oath and argued that AI labs commonly rely on competitors’ systems as part of development. That framing matters. It suggests Musk did not present the issue as a shocking exception, but as standard behavior in a fiercely contested industry where companies push to improve models fast and often test against what rivals have already built.

If Musk’s testimony reflects standard practice, the AI industry faces a harder question than whether one company crossed a line: it must decide whether the line exists at all.

The apparent admission lands in a sector already strained by disputes over data, copyright, scraping, and model training methods. AI companies market their systems as proprietary breakthroughs, yet the boundaries around what counts as fair learning, benchmarking, or distillation remain contested. Reports indicate Musk’s comments touched that nerve directly by suggesting competitors’ outputs can play a role in building new systems.

Key Facts

  • Elon Musk reportedly made the remarks while answering questions under oath.
  • He argued that using competitors’ models is standard practice among AI labs.
  • The comments appear to suggest xAI used OpenAI models in its own training process.
  • The issue adds to broader debates over AI training data, competition, and rules.

The timing sharpens the stakes. Musk has stood at the center of multiple fights over the future of artificial intelligence, including disputes about openness, safety, and commercial control. Any suggestion that xAI benefited from OpenAI’s work invites scrutiny not just of one company’s methods, but of the norms governing an industry that moves faster than the rules built to contain it.

What happens next depends on how regulators, courts, and AI companies respond to the gap between public rhetoric and private practice. If more evidence emerges that model-on-model training has become routine, pressure will build for clearer standards on what labs can borrow, replicate, or distill. That matters because the next phase of the AI boom may hinge less on who can build the smartest model from scratch and more on who can legally and credibly learn from everyone else.