Anthropic just released Claude Opus 4.7 on April 16, 2026—and while most headlines are focused on its coding gains, the real story is buried a little deeper. This is the first Claude model to ship with high-resolution vision support up to 3.75 megapixels, which is roughly three times the visual capacity of every Claude model that came before it. That upgrade alone has major implications for enterprise document analysis, computer use, and screenshot processing — and almost nobody is talking about it. Claude Opus 4.7 is also the model Anthropic is openly using as a safety test bed before it dares to release its far more powerful Mythos model to the public.
What makes this Claude Opus 4.7 launch stand out even further is Anthropic’s rare moment of transparency. The company publicly conceded that the new Opus model does not match the performance of Mythos—an advanced system currently locked behind a restricted program called Project Glasswing. Anthropic essentially said, “Here’s our best model you can actually use today, and we acknowledge something better exists but isn’t ready for the world yet.” That kind of candor is unusual in a space where every AI lab typically claims its latest release is the most powerful thing ever trained.
Claude Opus 4.7 Coding Performance — A Real Step Forward
The headline upgrade in Claude Opus 4.7 is undeniably coding performance, and the numbers back it up in a meaningful way. On SWE-bench Pro, the benchmark that tests a model’s ability to resolve real-world software issues from open-source repositories, Opus 4.7 scores 64.3%, up from 53.4% on Opus 4.6 — and well ahead of GPT-5.4 at 57.7% and Gemini 3.1 Pro. CursorBench, which measures autonomous coding in the popular AI code editor, shows a similar jump: 70% for Opus 4.7, up from 58% on Opus 4.6. Those are not marginal improvements. That is a meaningful gap that developers building production pipelines will actually feel.
I’ve been following this space for a while, and honestly, what impressed me most here is the self-verification behavior. Anthropic says Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back. Users in early access report being able to hand off their hardest coding work — the kind that previously needed close supervision — with real confidence. For teams running multi-step agentic workflows, that combination of reliability and self-checking is more valuable than a few benchmark points.
Early feedback from enterprise users is telling. One fintech platform testing Opus 4.7 noted that the model catches its own logical faults during the planning phase and accelerates execution far beyond previous Claude models. That is the kind of real-world signal that matters — a specific, grounded observation from production use, not a lab environment. On a 93-task coding benchmark run by an independent team, Claude Opus 4.7 lifted task resolution by 13% over Opus 4.6, including four tasks that neither Opus 4.6 nor Sonnet 4.6 could solve at all.
Claude Opus 4.7 Vision — The Upgrade Nobody Is Talking About
This is the part of the story that most articles missed entirely. Claude Opus 4.7 is the first model in Anthropic’s lineup with high-resolution image support. Maximum image resolution jumped from 1,568 pixels on the long edge — roughly 1.15 megapixels — to 2,576 pixels, or about 3.75 megapixels. That is more than three times the visual capacity of any previous Claude model. In practical terms, this means Opus 4.7 can now properly read fine print in scanned legal documents, process detailed technical drawings, and analyze financial statements that previous versions would partially hallucinate.
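The resolution jump is easy to sanity-check with quick arithmetic. A minimal sketch, where the short-edge values are assumptions back-derived from the stated megapixel figures (only the long-edge limits appear in the announcement):

```python
# Back-of-the-envelope check of the stated resolution limits.
# Short edges (734 px and 1,456 px) are assumed from the megapixel
# figures; only the long-edge limits are from the release notes.

def megapixels(long_edge: int, short_edge: int) -> float:
    """Total pixel count in megapixels."""
    return long_edge * short_edge / 1_000_000

old_mp = megapixels(1568, 734)   # ~1.15 MP, the previous Claude cap
new_mp = megapixels(2576, 1456)  # ~3.75 MP, the Opus 4.7 cap

print(f"old: {old_mp:.2f} MP, new: {new_mp:.2f} MP, "
      f"ratio: {new_mp / old_mp:.1f}x")
```

The ratio works out to roughly 3.3x, which is where the "more than three times" claim comes from.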
After looking into this more closely, I can tell you that the pixel-mapping fix is equally significant. The model’s coordinates now map 1:1 with actual pixels, eliminating the scale-factor math that was previously required when using Claude for computer-use tasks. Anyone who has wrestled with Claude’s computer use feature before knows exactly how frustrating imprecise pixel coordinates used to be—it was a hidden friction point that caused real problems in automated pipelines. Opus 4.7 appears to solve it cleanly.
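To see why 1:1 mapping matters, here is a minimal sketch of the scale-factor conversion that earlier computer-use pipelines had to perform. The function and the screen dimensions are illustrative assumptions, not Anthropic's actual API:

```python
# Sketch of the scale-factor math that 1:1 pixel mapping removes.
# The function name and downscale behavior are illustrative
# assumptions, not Anthropic's computer-use implementation.

def to_screen_coords(model_x: int, model_y: int,
                     model_w: int, screen_w: int,
                     model_h: int, screen_h: int) -> tuple[int, int]:
    """Pre-4.7 style: rescale model coordinates back to real pixels."""
    return (round(model_x * screen_w / model_w),
            round(model_y * screen_h / model_h))

# Before: a 2560x1440 screen downscaled to 1568x882 for the model,
# so every click target had to be rescaled, and rounding drifted.
print(to_screen_coords(784, 441, 1568, 2560, 882, 1440))  # -> (1280, 720)

# With Opus 4.7, a screenshot that fits within the 2576 px limit
# needs no rescaling: the conversion becomes the identity.
```

Removing that conversion step also removes the rounding error it introduced, which is exactly the drift that made automated clicking unreliable.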
Memory, Instruction Following, and the High Effort Level
Anthropic also highlights meaningful improvements to file system-based memory in Claude Opus 4.7. The model remembers important notes across long, multi-session work and uses them to carry context forward into new tasks, which then need significantly less setup. For developers building agents that run across multiple sessions, this is a practical quality-of-life upgrade that directly reduces friction in daily use. Having to re-explain context at the start of every conversation has been one of the most persistent pain points in agentic AI workflows.
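As a rough illustration of the idea behind file system-based memory, the toy sketch below persists notes between runs. The file name and JSON format are assumptions for illustration, not Anthropic's implementation:

```python
# Toy illustration of file system-based memory: persist notes so a
# later session starts with prior context. The file location and
# note format are assumptions, not Anthropic's actual mechanism.

import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical location

def save_note(key: str, value: str) -> None:
    """Write or update a note that survives the current session."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    notes[key] = value
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))

def load_notes() -> dict:
    """Read everything a previous session left behind."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

save_note("build_command", "make test")  # session 1 learns something
print(load_notes()["build_command"])     # session 2 -> make test
```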
The new “x-high” effort level, which sits between “high” and “max,” gives users finer control over the tradeoff between reasoning depth and latency. It is a new control option Anthropic is introducing specifically for hard problems where you want more thinking but don’t need to burn maximum compute. Anthropic is also testing “task budgets,” a developer feature that gives teams more explicit control over how Claude reasons on longer tasks. The tradeoff is real: the same input can map to roughly 1.0 to 1.35 times as many tokens depending on content type, and Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings.
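A hedged sketch of what per-task effort selection could look like in a request payload. The `effort` field name and the levels other than those named above are assumptions; the real parameter names live in Anthropic's API documentation:

```python
# Hypothetical request payload illustrating per-task effort control.
# The "effort" field and the full level ladder are assumptions made
# for illustration; consult the API docs for the real parameters.

def build_request(prompt: str, effort: str = "high") -> dict:
    """Assemble a chat request, validating the assumed effort ladder."""
    levels = ("low", "medium", "high", "x-high", "max")
    if effort not in levels:
        raise ValueError(f"effort must be one of {levels}")
    return {
        "model": "claude-opus-4-7",  # assumed model id string
        "effort": effort,  # x-high: deeper than high, cheaper than max
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Refactor this module", effort="x-high")
print(req["effort"])  # -> x-high
```

The point of the sketch is the shape of the control: effort is chosen per request, not fixed account-wide, which is what makes the tradeoff tunable task by task.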
When I first heard about the x-high effort level, I didn’t think much of it, but after digging in, I changed my mind completely. The ability to dial reasoning depth per task, rather than committing to max compute across the board, is exactly the kind of granular control that production teams have been asking for since Extended Thinking launched.
Adaptive Thinking and What It Means for Developers
Claude Opus 4.7 also ships with adaptive thinking—a system that lets Claude dynamically allocate thinking token budgets based on the complexity of each request. Rather than using a flat reasoning budget regardless of how hard a task is, the model allocates more or less thought depending on what the problem actually demands. The practical benefit is that simpler tasks stay fast and cheap, while complex tasks get the depth they need. According to Box’s head of AI, Opus 4.7 demonstrates significant efficiency gains while preserving the performance of Opus 4.6—a combination that should matter to any team watching API costs.
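The intuition behind adaptive thinking can be sketched as a toy budget allocator: estimate complexity, then scale the reasoning budget instead of using a flat one. The heuristic, constants, and complexity proxy below are all illustrative assumptions:

```python
# Toy sketch of the idea behind adaptive thinking: scale the thinking
# token budget with an estimate of task complexity rather than using
# a flat budget. All numbers and heuristics here are assumptions.

def thinking_budget(prompt: str, base: int = 1024, cap: int = 32768) -> int:
    """Allocate more thinking tokens to longer, code-heavy prompts."""
    complexity = len(prompt.split())         # crude word-count proxy
    if "```" in prompt or "def " in prompt:  # code raises the estimate
        complexity *= 10
    return min(cap, max(base, complexity * 32))

print(thinking_budget("What is 2 + 2?"))             # stays at the base
print(thinking_budget("Debug this: def f(): ..."))   # gets a larger budget
```

The cheap question stays at the floor while the code-bearing one earns a bigger budget, which is the cost profile the Box quote describes: simple tasks stay cheap, hard ones get depth.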
Safety Guardrails—Anthropic Is Playing a Long Game
What most articles skimmed past is the broader strategic significance of Claude Opus 4.7’s safety architecture. This isn’t just an upgrade release. It is a deliberate, public safety experiment that Anthropic needs to run before it can responsibly release anything more powerful than Opus 4.7. Anthropic is releasing Opus 4.7 with new safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses — and everything they learn from deploying those guardrails in the real world is feeding directly into their plans for a broader release of Mythos-class models.
This is the forward implication that most coverage completely ignored. Every interaction with Opus 4.7’s cybersecurity safeguards is generating data that Anthropic will use to shape how, when, and to whom Mythos eventually gets released. Cybersecurity professionals who want to use Opus 4.7 for legitimate purposes—vulnerability research, penetration testing, red teaming—can apply through Anthropic’s new Cyber Verification Program. Industry insiders hint that Mythos could see broader availability by late 2026 if the Opus 4.7 safety rollout goes well. Sources suggest that Apple, Google, and Microsoft are currently among the select partners with Mythos access under Project Glasswing.
It is rumored that Anthropic is already in early IPO discussions at a valuation near $800 billion, making these safety decisions carry real weight far beyond academic AI ethics. The company is running at a $30 billion annualized revenue rate—a buried stat that puts the pressure on Opus 4.7 to actually justify that commercial momentum. According to reports, Claude Code alone hit $2.5 billion in annualized revenue by February 2026, cementing coding as Anthropic’s most critical commercial battleground.
Pricing and Where to Access Claude Opus 4.7
Pricing for Claude Opus 4.7 remains identical to Opus 4.6: $5 per million input tokens and $25 per million output tokens. The 1M token context window is included at standard pricing with no long-context premium. Prompt caching offers up to 90% cost savings, and the Batch API provides a 50% discount on both input and output. Claude Opus 4.7 is available now on Claude Pro, Max, Team, and Enterprise plans, and through the API on Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
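The published rates make per-request costs easy to estimate. A minimal calculator, assuming the cache discount applies to the cached share of input tokens and the batch discount applies to the whole request:

```python
# Cost sketch using the published Opus 4.7 rates: $5 / 1M input
# tokens, $25 / 1M output tokens, up to 90% off cached input,
# and a 50% Batch API discount. How the discounts compose is an
# assumption for illustration.

def request_cost(input_tokens: int, output_tokens: int,
                 cached_fraction: float = 0.0, batch: bool = False) -> float:
    """Estimate API cost in dollars for one request."""
    in_rate, out_rate = 5.0, 25.0           # $ per million tokens
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    cost = (fresh * in_rate + cached * in_rate * 0.10  # 90% cache saving
            + output_tokens * out_rate) / 1_000_000
    return cost * 0.5 if batch else cost    # Batch API: 50% discount

# 100k tokens in / 10k out, 80% of the prompt served from cache:
print(f"${request_cost(100_000, 10_000, cached_fraction=0.8):.3f}")  # $0.390
```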
The model also ships with a new /ultrareview command in Claude Code, which runs a dedicated review session that reads through your code changes and flags issues a careful reviewer would catch. Anthropic has also shifted its default reasoning level in Claude Code and added an auto mode for Max plan subscribers, not just Team and Enterprise customers. For developers migrating from Opus 4.6, the main breaking change involves the temperature, top_p, and top_k parameters: requests that set them will return 400 errors on Opus 4.7, and the migration path is to switch to adaptive thinking and use prompting to control output behavior instead.
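For the parameter migration, a defensive sketch that strips the rejected sampling parameters before sending. The request shape and model id string are assumptions; the list of rejected parameters comes from the release notes:

```python
# Migration sketch: strip the sampling parameters that Opus 4.7
# rejects with a 400 error before sending a request. The request
# shape and model id are illustrative assumptions.

REMOVED_PARAMS = ("temperature", "top_p", "top_k")

def migrate_request(params: dict) -> dict:
    """Return a copy of an Opus 4.6 request safe to send to Opus 4.7."""
    migrated = {k: v for k, v in params.items() if k not in REMOVED_PARAMS}
    migrated["model"] = "claude-opus-4-7"  # assumed model id string
    return migrated

old = {"model": "claude-opus-4-6", "temperature": 0.3, "top_p": 0.9,
       "messages": [{"role": "user", "content": "hi"}]}
print(sorted(migrate_request(old)))  # -> ['messages', 'model']
```

Dropping the parameters at the client boundary, rather than hunting down every call site, keeps existing pipelines running while teams move output control over to prompting.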
Claude Opus 4.7 is the most capable AI model Anthropic has made publicly available to date. It outperforms GPT-5.4 and Gemini 3.1 Pro on the benchmarks developers actually run in production, brings a long-overdue vision upgrade that will matter far more than most people realize, and carries real strategic significance as the safety proving ground for whatever Anthropic plans to release next. Watching how Claude Opus 4.7’s safeguards perform in the wild over the next few months may turn out to be the most important story in AI this year — more important, even, than the model’s benchmark scores.