OpenAI is putting its newest cybersecurity tool behind a narrow gate, offering GPT-5.5 Cyber only to what it calls critical cyber defenders at first.

The decision, first outlined in reports on the rollout, marks a controlled launch for a system built for cybersecurity testing. That alone would make news in a sector where powerful AI tools can help defenders move faster. But the timing gives the move extra force: OpenAI now appears to embrace the same logic of limited access that it recently challenged when criticizing Anthropic for restricting Mythos.

Open access sounds principled until a powerful security tool can sharpen both defense and attack.

OpenAI has not announced a broad public release. Instead, it plans to start with a tightly defined set of users whose work centers on defending critical systems. That framing matters. Cyber tools rarely stay confined to their intended use, and companies building them increasingly face a blunt choice: move fast and risk misuse, or slow down and accept accusations of inconsistency, caution, or both.

Key Facts

  • OpenAI plans to roll out GPT-5.5 Cyber in a limited release.
  • The tool will go first to critical cyber defenders.
  • The product is described as a cybersecurity testing tool.
  • The move follows criticism of Anthropic for limiting access to Mythos.

That tension now sits at the center of the AI security race. Companies want to show they can build useful systems for high-stakes environments, but they also know the same models can lower the barrier to harmful behavior if released too widely. Reports indicate OpenAI is trying to thread that needle by limiting who gets the tool first, even if that undercuts a more open posture it seemed to favor in recent debate.

What happens next will shape more than one product launch. If GPT-5.5 Cyber proves useful in the hands of trusted defenders, OpenAI may gain a stronger case for phased access to other sensitive systems. If critics seize on the apparent reversal, the company may have to explain where it draws the line between openness and restraint. Either way, this rollout signals a broader shift in AI: when tools touch critical security work, access itself becomes the story.