What's new at Anthropic: April recap (Opus 4.7, Haiku 4.5, and Claude Code evolves)
In the March recap I wrote that I was expecting longer context for Sonnet and an improved vision model. Anthropic took it in a different direction — they shipped a new flagship and pushed the context window even further than I expected. April was a model-heavy month.
Here's the rundown of what mattered, with my take on each item.
Claude Opus 4.7 — flagship with 1M context
Probably the biggest news of the month. Opus 4.7 got a 1 million token context window. For perspective — that's roughly 750,000 words, several large books, or a sizable monorepo with tests and docs included.
What actually improved over Opus 4:
- 1M context (up from 200k) — entire project, entire book, entire contract at once
- Better long-context reasoning — the model doesn't lose the thread when you feed it 500k tokens
- Coding — more sensitive to project style and conventions, better multi-file refactoring
- Pricing — slightly higher than Opus 4, but competitive at the 1M tier
What does it change in practice? I drop a whole monorepo into Opus and say "find every place that touches the database without a transaction." Instead of having Claude request files via tools, it sees everything at once. Reasoning at that context size feels qualitatively different.
Second new use case: long PDFs and contracts. Drop a 200-page tender into the prompt and ask "what are the risks here" — that's now a real workflow.
My take: I only fire up Opus 4.7 for big tasks — architecture review, project audits, multi-file refactors. For daily work I still use Sonnet 4 (waiting to see if a Sonnet 4.5 lands too). The 1M context is amazing, but you pay for it. And 80% of tasks don't need it. Use it when you actually need it — not by default.
Claude Haiku 4.5 — small, fast workhorse
The second major release. Haiku 4.5 showed up as a fast, cheap model for tasks where you don't need Sonnet or Opus. Positioning is clear: sub-second latency, much lower price, "surprisingly good" quality.
Where Haiku 4.5 makes sense:
- Hooks in Claude Code — pre-commit checks, auto-summary commit messages, classification
- Background tasks — content moderation, auto-tagging, email classification
- Sub-agents — when the main model delegates a small specific task
- Batch jobs — bulk processing thousands of records, where Sonnet would be overkill
- Embedded experience — in apps where the user is waiting for a real-time response
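For the hooks use case, here's a sketch of what wiring a cheap model into a Claude Code hook might look like in .claude/settings.json. The hook event names and schema follow what I've seen in the docs, but treat the exact command and the bare "haiku" model alias as assumptions to verify:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "claude -p 'Is this shell command destructive? Answer yes or no.' --model haiku"
          }
        ]
      }
    ]
  }
}
```

The point is the pattern: a sub-second, near-free model sitting in the hot path of every tool call, where Sonnet or Opus would add noticeable latency.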
My take: I deployed Haiku 4.5 as a "first responder" for classification tasks in one of my scrapers — instead of regex rules, the model decides. Latency is a few hundred milliseconds, the cost per 1000 classifications is pocket change, and quality beats hand-written rules. That's exactly the use case where Haiku 4.5 shines. Don't try it on heavy reasoning — that's what Sonnet and Opus are for.
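The classifier pattern above is mostly prompt plus parsing, and the parsing is where it breaks if you're sloppy. A sketch of how I do it, with hypothetical category names and the API call itself hedged out, since the model id is my guess:

```python
LABELS = ("product", "review", "spam", "other")  # hypothetical categories

def build_prompt(text: str) -> str:
    """One-shot classification prompt that forces a single-word answer."""
    return (
        f"Classify this scraped snippet as exactly one of: {', '.join(LABELS)}.\n"
        "Answer with the label only.\n\n"
        f"Snippet: {text}"
    )

def parse_label(response: str) -> str:
    """Normalize the model's reply; fall back to 'other' on anything unexpected."""
    label = response.strip().lower().rstrip(".")
    return label if label in LABELS else "other"

# Hedged usage, assuming the Anthropic Python SDK; model id is hypothetical:
# from anthropic import Anthropic
# reply = Anthropic().messages.create(
#     model="claude-haiku-4-5",  # hypothetical id, check the docs
#     max_tokens=5,
#     messages=[{"role": "user", "content": build_prompt(snippet)}],
# ).content[0].text
# print(parse_label(reply))
```

The "fall back to other" branch is the whole trick: small models occasionally reply with a sentence instead of a label, and you want that to degrade gracefully, not crash a batch of thousands.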
Claude Code is evolving (a lot)
March's hooks and CLAUDE.md were the start. April's updates pushed Claude Code closer to a full dev environment.
Skills marketplace
Skills became shareable. Instead of everyone writing their own instructions for brainstorming, code review, or TDD, you can install a community or Anthropic skills package. It works like npm — claude skill install superpowers/brainstorming and you're set.
That's not just a terminal utility anymore. That's an ecosystem.
Sub-agents for parallel work
Claude Code can now run sub-agents in parallel. Say you want to check three independent parts of an app. Instead of a sequential "look here, then here, then here," you dispatch three sub-agents at once. Each gets its own context and returns its result to the main agent.
For large tasks this is a game changer. Ten minutes of sequential work becomes three minutes of parallel work.
Plan mode by default
For non-trivial tasks Claude Code automatically switches to plan mode — instead of writing code immediately, it produces a plan and waits for approval. Only then does it execute.
This is a fundamental shift for me. Claude used to jump on the first idea. Now it gives you a plan → you tweak it → you confirm → it executes. Far fewer surprises.
My take: I've started using skills slowly — superpowers:brainstorming and superpowers:writing-plans are excellent for kicking off big features. Sub-agents I use less so far, mainly because it takes time to internalize what's parallelizable. Plan mode calms me down — code stops "unraveling" mid-task.
claude.ai — small touches, but nice
Less dramatic news, but worth mentioning:
- Memory across conversations — Claude remembers context from past chats. You don't have to re-explain what you're working on every time. Opt-in.
- Projects updates — better context management, shared team projects, templates.
- Mobile app polish — faster, more stable, voice mode got a redesign. Finally usable.
My take: Memory is a double-edged sword. The convenience of "Claude remembers I write a Czech AI blog" is great. But sometimes I want a clean slate, and old context sabotages me. I have it on — but with the awareness that I occasionally need to manually delete old "memories."
What I'm expecting in May
Speculation based on what Anthropic has been hinting at:
- Sonnet 4.5 with 1M context — if Opus 4.7 works well, it will logically trickle down. Sonnet at 1M for a reasonable price would cover most daily use cases.
- Vision upgrade — promised in March, hasn't landed yet
- Computer Use 2.0 — Anthropic mentioned better desktop control, I'm waiting for the demo
If we see even just Sonnet 4.5 + 1M, it'll be another great month.
Summary
| News | Impact |
|------|--------|
| Opus 4.7 + 1M context | Highest — changes what's possible with context |
| Haiku 4.5 | High — fills the Haiku 3.5 gap, price/performance |
| Skills marketplace | High — Claude Code as a platform |
| Sub-agents | Medium — early days, but big potential |
| Plan mode default | Medium — more discipline, less chaos |
| Memory in claude.ai | Low — handy, but occasionally annoying |
April was model-heavy. May could be ecosystem-heavy — if Anthropic ships a Sonnet upgrade and Computer Use, this will be the strongest year for Claude since launch.
See you at the May recap.