A Medicare experiment built to root out waste now faces a harsher test: whether an AI-powered approval system blocks patients from getting care on time.
Reports indicate that a Centers for Medicare & Medicaid Services (CMS) pilot in six states uses artificial intelligence to review prior authorizations, the process by which insurers and government programs approve certain treatments, tests, or services before they happen. CMS has framed the effort as a way to cut "waste, fraud and abuse." Critics, however, argue that the system adds friction at the worst possible moment: when patients need quick decisions and doctors need clear answers.
Key Facts
- CMS has launched a Medicare pilot in six states.
- The program uses AI in prior authorization reviews.
- Officials say the goal is to reduce waste, fraud, and abuse.
- Reports suggest patients are facing delays and possible harm.
The tension here goes beyond one program. Prior authorization already frustrates many patients and clinicians because it can slow treatment and create extra paperwork. Adding AI raises the stakes. Supporters may see automation as a faster, more consistent filter. Opponents see a tool that can scale up denials, confusion, or delays if the system flags legitimate care too aggressively or if human oversight falls short.
The central question is no longer whether AI can process authorizations faster, but whether it can do so without turning needed care into a waiting game.
This matters because Medicare serves older adults and people with disabilities, groups that often cannot absorb long waits or bureaucratic mistakes. Even a short delay can carry real consequences when treatment windows are narrow or chronic conditions worsen without intervention. The controversy also lands at a moment when health systems, insurers, and regulators are racing to deploy AI tools faster than many patients can understand how those tools affect life-and-death decisions.
What happens next will likely shape more than this six-state pilot. CMS could face pressure to explain how the technology works, how often humans override it, and what safeguards protect patients when the system gets it wrong. If reports of delayed care keep mounting, this program may become an early warning for the broader use of AI in American healthcare — and a reminder that efficiency means little if patients pay the price.