Anthropic has pushed its Claude Managed Agents closer to all-day digital workers by adding a new pause-and-resume behavior that lets tasks pause, wait, and resume later.

The company describes the feature in language that nods to "dreaming," but the practical shift looks more grounded: agents can now handle work that unfolds over longer stretches instead of completing in a single uninterrupted run. That matters because many useful software tasks depend on waiting for outside events, delayed inputs, or scheduled follow-ups. Reports indicate Anthropic wants its agents to feel less like one-shot tools and more like systems that can manage extended assignments.

The update points to a simple goal: make AI agents useful during the gaps, not just during the burst of action.

Anthropic paired that change with a more immediate concession to users of Claude Code. The company said five-hour usage limits will double for Pro and Max subscribers, a notable increase for customers who rely on the coding product for sustained sessions. That move addresses a practical bottleneck as demand grows for AI coding tools that can stay productive across longer workflows.

Key Facts

  • Anthropic updated Claude Managed Agents with a pause-and-resume capability.
  • The company framed the behavior as a kind of limited “dreaming.”
  • Claude Code five-hour usage limits will double for Pro and Max users.
  • The changes target longer, more continuous AI-assisted work.

The announcement lands in a crowded AI market where companies now compete less on flashy demos and more on whether their systems can actually carry useful work from start to finish. A background waiting state may sound small, but it tackles a real weakness in many agent products: they often fail when a task needs patience, timing, or a return visit. By extending usage limits at the same time, Anthropic also signals that raw access still shapes the user experience as much as model quality does.

What comes next will matter more than the marketing language. If the new agent behavior reliably handles delayed tasks and if higher Claude Code limits reduce friction for paying users, Anthropic could strengthen its position with developers and businesses looking for AI that works beyond a single prompt. The broader test now is whether these tools can turn longer attention spans into consistently better results.