OpenAI has opened a new front in the fight over account security, rolling out an Advanced Account Security mode for users who worry their ChatGPT or Codex accounts could become targets of phishing attacks.

The move lands at a moment when AI accounts carry far more than a login and password. For some users, these accounts hold sensitive conversations, development workflows, and access to high-value tools, which makes them attractive targets. OpenAI’s framing signals a clear concern: some users face elevated risk, and standard protections may not be sufficient.

OpenAI’s new security push reflects a broader reality: as AI accounts become more valuable, they also become more vulnerable.

Reports indicate the feature targets people who believe they may face account-focused attacks, especially phishing attempts designed to steal credentials or trick users into handing over access. Available reports do not detail every technical element of the rollout. But the message behind the launch comes through clearly: the company wants users with higher exposure to adopt stronger defenses before an attack succeeds, not after.

Key Facts

  • OpenAI is rolling out Advanced Account Security for certain users.
  • The feature applies to ChatGPT and Codex accounts.
  • The stated concern centers on phishing attacks and targeted account compromise.
  • The rollout focuses on users who believe their accounts may face elevated risk.

The decision also puts OpenAI in step with a wider shift across the tech industry. Companies increasingly treat account protection as a frontline product issue, not a buried settings-page option. That matters because phishing rarely depends on breaking software; it often exploits human trust, urgency, and confusion. A stronger security mode can help, but only if users understand why the risk exists and when to turn those protections on.

What happens next will matter far beyond one product setting. If OpenAI expands the feature, explains it clearly, and nudges the right users to adopt it, the company could reduce one of the most common paths into sensitive accounts. As AI tools become more embedded in work and daily life, the security choices around them will shape how much trust users place in the entire ecosystem.