On May 7, 2026, Anthropic presented three new capabilities for the Claude Managed Agents platform at its second annual Code with Claude Developer Conference in San Francisco. The most significant of them — "dreaming" — is an asynchronous mechanism that lets AI agents analyze records of past sessions and reorganize memory to extract recurring patterns. At the same conference, CEO Dario Amodei disclosed that Claude's annualized revenue and usage growth in Q1 2026 came in at 80x against an internal plan built for 10x growth.
Key takeaways
- The "dreaming" feature is available in research preview and requires a separate beta header dreaming-2026-04-21
- Outcomes and multi-agent orchestration moved from research preview to public beta for all Claude developers
- Harvey (legal AI company) reported a 6x improvement in task completion rates after deploying dreaming
- Wisedocs cut medical document review time by 50% using outcomes
- Anthropic doubled the five-hour usage limits for Pro, Max, Team, and Enterprise plans
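The takeaways above mention a separate beta header and, later in this piece, a 100-transcript cap and two supported models. As a hedged sketch of what a dream-job request might look like: the helper, endpoint fields, and parameter names below are assumptions for illustration; only the header value, the model names, and the transcript cap come from the announcement.

```python
def build_dream_request(memory_store_id, transcript_ids,
                        model="claude-sonnet-4-6"):
    """Assemble headers and body for a hypothetical dream-job request.

    Only the beta header value, the 100-transcript cap, and the model
    names come from the announcement; the field names and the shape of
    this helper are assumptions.
    """
    if model not in ("claude-opus-4-7", "claude-sonnet-4-6"):
        raise ValueError("dreaming beta supports claude-opus-4-7 "
                         "and claude-sonnet-4-6 only")
    if len(transcript_ids) > 100:
        raise ValueError("a dream accepts at most 100 past session transcripts")
    headers = {
        "anthropic-beta": "dreaming-2026-04-21",  # the separate beta header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "memory_store_id": memory_store_id,  # input store, never modified
        "transcript_ids": transcript_ids,    # optional, up to 100
    }
    return headers, body
```
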
What dreaming is — and what it is not
"Dreaming" is not a mechanism for learning via model weight updates. According to the official Claude Managed Agents documentation, a dream is an asynchronous job that takes an existing memory store and optionally up to 100 past session transcripts as inputs. The output is a new, reorganized memory store: duplicates merged, contradicted entries replaced with the latest value, and new insights surfaced across all input sessions simultaneously.
Critically, the input memory store is never modified. A developer can review the output and discard it without consequence. The output memory store is an ordinary workspace resource that can be attached to future sessions. Dreaming is billed at standard API token rates for the selected model; during the beta period, claude-opus-4-7 and claude-sonnet-4-6 are supported.
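The stated semantics can be modeled in a few lines. This is a toy simulation, not the real mechanism: an actual dream is an LLM pass over transcripts, not a dict merge. The sketch only illustrates the three documented guarantees: the input store is untouched, duplicates collapse, and a contradicted key keeps the latest value.

```python
from copy import deepcopy

def reorganize(memory_store, session_entries):
    """Toy model of a dream's reorganization semantics.

    The input store is never modified; duplicate entries collapse to
    one; for a contradicted key, the entry with the latest timestamp
    wins; keys seen only in sessions surface as new insights.
    """
    output = deepcopy(memory_store)                 # input left untouched
    for entry in sorted(session_entries, key=lambda e: e["ts"]):
        current = output.get(entry["key"])
        if current is None or current["ts"] <= entry["ts"]:
            output[entry["key"]] = {"value": entry["value"],
                                    "ts": entry["ts"]}
    return output
```

Because the output is a separate object, a developer can inspect it and simply discard it, mirroring the review-and-discard workflow described above.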
Alex Albert, Head of Research Product Management at Anthropic, described the analogy to organizational learning: an agent runs a workflow, and at the end, records a summary of the path from A to B. Dreaming does the same automatically — instead of manually creating skills from experience, the model extracts knowledge for future sessions on its own.
Outcomes and multi-agent orchestration in public beta
Both features, previously in research preview, are now available to all developers. Outcomes lets developers define a success rubric — a structure, a presentation standard, a brand voice, or any other criteria set — and instruct the agent to iterate toward that standard without human intervention. The key architectural element is separation of concerns: a dedicated grader agent evaluates outputs in its own independent context window.
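The worker/grader separation can be sketched as a plain loop. All names here are hypothetical; on the platform, both roles would be separate agents with independent context windows rather than Python callables.

```python
def iterate_to_outcome(produce, grade, max_rounds=5):
    """Sketch of the outcomes loop described above: `produce` drafts
    (revising against any feedback), a separate `grade` checks the
    draft against a rubric, and the loop stops when the grader passes
    the draft or the round budget runs out."""
    draft, feedback = None, None
    for round_no in range(1, max_rounds + 1):
        draft = produce(feedback)          # worker, with grader feedback
        passed, feedback = grade(draft)    # independent grader
        if passed:
            return draft, round_no
    return draft, max_rounds
```

For instance, a grader enforcing a two-word minimum rejects a one-word first draft and passes the revision on the second round.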
Multi-agent orchestration enables a lead agent to decompose complex tasks into subtasks and delegate each to a specialist agent — with its own model, system prompt, and independent context window. Netflix is already using this mechanism to process logs from hundreds of builds simultaneously.
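The lead-agent pattern can be sketched with threads standing in for sub-agents. Every name below is illustrative; on the platform each specialist would be a model call with its own system prompt, and the fresh dict stands in for its independent context window.

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(task, decompose, specialists):
    """Sketch of the orchestration pattern described above: the lead
    decomposes a task into subtasks, routes each to a specialist keyed
    by kind, and runs them concurrently with isolated per-specialist
    state."""
    subtasks = decompose(task)
    def run(subtask):
        context = {}                       # independent per-specialist state
        return specialists[subtask["kind"]](subtask["payload"], context)
    with ThreadPoolExecutor() as pool:     # subtasks run concurrently
        return list(pool.map(run, subtasks))
```

A log-triage toy version routes each build log line to a scanner specialist, echoing the Netflix use case of processing many builds at once.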
Growth and compute constraints
The figures Amodei shared at the conference are concrete: API volume on the Claude platform is up nearly 70x year over year. The average developer using Claude Code spends 20 hours per week with the tool. Mercado Libre — Latin America's largest e-commerce platform — has 23,000 engineers running Claude Code and has reviewed more than 500,000 pull requests with human oversight.
This growth required urgent action on the infrastructure side. Anthropic announced a partnership with SpaceX granting access to the full capacity of the Colossus data center, expanding compute availability. The company simultaneously raised API rate limits across all plans.
Why this matters
Dreaming defines a new category in AI agent architecture. Previous platforms offered session memory, conversation history, and tool use — none introduced a mechanism for systematically reviewing one's own operational history to extract patterns spanning multiple sessions. Comparing dreaming with outcomes and multi-agent orchestration reveals a coherent Anthropic strategy: rather than racing purely on raw model capability, the company is building a production-reliability layer — verification, learning, and scalability without human intervention.
For enterprise customers, this means a new kind of purchasing argument: not "which model is most intelligent?" but "which platform automatically improves while working?" The benchmark data — Harvey's 6x task completion gain, Wisedocs' 50% document review speedup — are the first signals from real deployments, though both companies are Anthropic customers, so the figures are not independently verified.
The 80x annualized growth against a 10x plan signals that demand is outrunning the company's supply side. The compute constraints Amodei described openly mean the coming months are primarily an infrastructure race, not just an algorithmic one.
What's next
- Dreaming remains in research preview and requires a request for access via an Anthropic form — no announced date for general availability
- The SpaceX partnership (Colossus) is intended to expand compute availability in response to the constraints Amodei described at the May 7, 2026 conference
- Amodei predicts 2026 will see the first billion-dollar company run by a single person using AI agents — a concrete, measurable verification point for Anthropic's strategy