Anthropic’s April 7, 2026 announcement of Project Glasswing is one of the clearest signals yet that frontier AI may be moving into a different political category.
The company says Claude Mythos Preview is so capable at finding and exploiting software vulnerabilities that it does not plan to make the model generally available. Instead, the model is being released to a gated set of large technology firms, critical infrastructure organisations, and security partners for defensive use.
On one level, that is reasonable.
If a lab genuinely believes it has a model that can autonomously surface zero-days at a level beyond almost all human experts, some form of controlled release is hard to argue against. The offensive misuse case is obvious.
But there is a bigger issue underneath it.
Once access to the best models is decided by trust tiers, government relationships, security clearance, or corporate scale, we are no longer just talking about a product launch. We are talking about controlled access to intelligence itself.
That is a very different kind of power.
Why This Is Different From Normal Export Controls
We already accept controls on strategic physical technologies.
Advanced GPUs, semiconductor equipment, cryptographic systems, missile components, and dual-use industrial machinery are all treated as things that can shift military and economic power. States already decide who gets them, under what conditions, and in which countries.
What is different about restricting frontier AI is that the scarce thing is not a physical object. It is a general purpose reasoning system.
That matters because intelligence is a force multiplier for nearly everything else.
Give a country better missiles and you improve one class of capability. Give a country better intelligence and you improve research, engineering, planning, and institutional decision making all at once.
Restricting access to that is closer to restricting who gets accelerated cognition than who gets a specific tool.
The Munitions Slope Is Real
This is not a new pattern. The logic runs like this:
- A lab says a model is too dangerous for broad release.
- Access is limited to trusted actors.
- Governments get involved because the capability has national security relevance.
- Export regimes emerge around model weights, inference access, chips, data centre buildout, or all four.
- Access becomes a strategic privilege rather than a general technology diffusion story.
We are already partway down that path with compute.
The United States has spent years using export controls to govern access to advanced semiconductors because they affect AI capability and military power. It would be strange to think the same governments will treat the models themselves as politically neutral once those models become even more strategically useful than the chips.
The question is not whether states will be tempted to govern frontier intelligence like a strategic asset.
They obviously will.
The real question is whether societies will let that evolve into permanent private and geopolitical concentration.
The Compounding Advantage Is The Real Story
The biggest consequence of restricted model access is not prestige. It is compounding advantage.
If only a small number of firms, labs, governments, and approved institutions can use the strongest systems, they get to:
- learn faster
- automate more
- design better systems
- discover vulnerabilities earlier
- compress more work into fewer people
- and reinvest the gains into even more compute, talent, and influence
That creates a feedback loop.
The organisations with access do not merely gain a temporary productivity boost. They improve their rate of improvement. That is a much more powerful advantage than a single good quarter or a slightly better product cycle.
This is why controlled access to intelligence is so politically sensitive. The winners do not just pull ahead once. They widen the gap every month the restriction stays in place.
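To see why the gap widens rather than merely persists, consider a deliberately simple toy model. The growth rates below are invented assumptions chosen only to show the shape of the effect, not estimates of real productivity gains.

```python
# Toy model of compounding advantage under unequal model access.
# Every number here is an illustrative assumption, not a measurement.

gated_rate = 0.03  # assumed monthly improvement with frontier access
open_rate = 0.01   # assumed monthly improvement on weaker tools

gated, other = 1.0, 1.0  # both start from the same baseline capability
for month in range(1, 37):
    gated *= 1 + gated_rate
    other *= 1 + open_rate
    if month % 12 == 0:
        print(f"year {month // 12}: capability gap = {gated / other:.2f}x")

# year 1: capability gap = 1.27x
# year 2: capability gap = 1.60x
# year 3: capability gap = 2.03x
```

Even with these modest assumed rates, the ratio roughly doubles within three years. The absolute numbers are meaningless; the exponential divergence is the point.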
What About Open Models?
The obvious counterargument is open-weight models. Llama, Mistral, DeepSeek, and others are freely available and increasingly capable. If anyone can download and run a strong model, does gated access to the frontier really matter?
It matters because the gap between open and frontier is not closing. It is widening at the top end. Open models are excellent for a broad range of tasks, but the capabilities that trigger security concerns (the kind Anthropic is restricting here) are precisely the capabilities open models do not yet match. And if they did, they would likely face their own restrictions.
There is also a structural issue. Multiple frontier labs exist, each with different release philosophies, but competition between them has not produced a race to openness. The opposite has happened: as capabilities increase, the major labs have converged toward more cautious release. Competition does not solve the access problem when every serious competitor reaches the same conclusion about risk.
Open models are a vital counterweight to concentration, and their continued development matters enormously. But they are not a substitute for frontier access, and they do not invalidate the concern.
What Happens Inside Countries
Inside a country, restricted frontier models would deepen class and institutional divides.
Large firms would get better intelligence before small firms. Elite universities before ordinary schools. Defence contractors before local councils. Well-funded hospitals before ordinary clinics. Big law firms before citizens trying to navigate the state.
That means the most capable tools for learning, analysis, legal reasoning, scientific assistance, software development, and cyber defence would arrive first where money, influence, and compliance capacity are already concentrated.
The result would not just be inequality of income. It would be inequality of cognitive leverage.
That is a more dangerous form of inequality because it spills into everything else:
- who can start companies
- who can defend their systems
- who can move up the learning curve quickly
- who can navigate bureaucracy
- who can shape public narratives
- and who can compete with already dominant institutions
There is also a labour market effect that is easy to miss.
If the best models are available only inside a small set of elite organisations, then the people inside those organisations will compound their skills faster, while everyone else trains on weaker tools. Over time, that turns AI access into a new kind of educational sorting system.
In that world, individual merit becomes inseparable from institutional access.
What Happens Between Countries
Between countries, the stakes are even larger.
Anthropic’s own safety roadmap explicitly talks about AI systems accelerating work in areas that could affect international security and “the global balance of power”. That is the correct frame.
If frontier intelligence is tightly controlled by a handful of American firms and made available mainly to approved partners, then model access starts to look like alliance infrastructure.
Countries with access gain:
- faster scientific research
- stronger cyber defence and potentially stronger cyber offence
- better military planning and logistics
- more productive software and knowledge sectors
- better state capacity through AI assisted administration
Countries without access do not just miss out on another app.
They risk becoming permanently downstream from the states and firms that do control advanced intelligence. They can buy products built with it, rent services powered by it, or align themselves politically to gain partial access, but they do not fully own the capability.
That is a recipe for dependency.
The likely result is not a clean world split between “AI haves” and “AI have-nots”. It is something more layered:
- Core countries with direct frontier access: the United States, and whichever allies host or co-develop the leading labs.
- Aligned countries with licensed access: the Five Eyes, parts of the EU, Japan, South Korea, and other states that trade diplomatic alignment for a seat at the table.
- Everyone else relying on weaker public models or foreign platforms: most of the Global South, non-aligned states, and any country that falls on the wrong side of an export review.
Once that structure exists, it will shape trade, diplomacy, defence partnerships, and industrial policy for years.
The Security Case Is Still Real
None of this means Anthropic is necessarily wrong to gate Mythos.
If the model really can autonomously identify and chain together critical vulnerabilities across major operating systems, then releasing it widely tomorrow would be reckless.
There is a serious difference between “open access is good” and “every dangerous capability should be public on day one”.
The problem is what happens after the emergency logic.
Temporary restriction for a specific, auditable danger is one thing. A permanent regime in which the strongest intelligence systems are available only to a small club of corporations, security-cleared operators, and favoured states is something else entirely.
That second model would not just manage risk. It would formalise a new hierarchy of cognition.
A Better Alternative To Cognitive Feudalism
The term is deliberately strong. Feudalism was a system in which access to the most important resource, land, was controlled by a small class and distributed downward through loyalty and service. If frontier intelligence follows the same pattern, controlled by a small number of labs and distributed through commercial relationships and government approval, the structural parallel is hard to ignore.
If some frontier capabilities really do need staged release, the answer cannot just be “trust the labs and their largest partners”.
A more defensible approach would look like this:
- Restrictions should be narrow, capability-specific, and time-bounded. Not “this model is restricted” but “this specific capability is restricted for this defined period, after which the restriction is reviewed” (see the sketch after this list).
- The criteria for access should be transparent and externally reviewable, not decided behind closed doors between a lab and its largest customers.
- Public interest institutions (universities, hospitals, public defenders, civil society organisations) should have a defined path to access, not just the richest firms.
- Democratic allies should avoid turning model access into pure corporate patronage.
- Independent third-party oversight should exist where the risks are genuinely international. The model here is closer to the IAEA’s inspection regime for nuclear materials, or the WHO’s tiered pathogen-sharing framework, than to a corporate NDA.
- The default goal should still be wider safe diffusion, not indefinite scarcity.
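To make “capability-specific and time-bounded” concrete, here is a purely hypothetical sketch of a restriction expressed as reviewable data rather than a blanket gate. No lab publishes policy in this form; every field name and value below is invented for illustration.

```python
# Hypothetical sketch: a restriction as auditable data, not a blanket
# "this model is gated" decision. All names and values are invented.

from dataclasses import dataclass
from datetime import date


@dataclass
class CapabilityRestriction:
    capability: str          # the specific behaviour being withheld
    rationale: str           # the auditable danger it addresses
    review_date: date        # restriction lapses unless re-justified
    reviewer: str            # an external body, not the lab alone
    public_access_path: str  # defined route for public-interest institutions


restriction = CapabilityRestriction(
    capability="autonomous vulnerability chaining",
    rationale="zero-day discovery beyond expert human baseline",
    review_date=date(2026, 10, 7),          # six months out, then expire or renew
    reviewer="independent oversight board",  # hypothetical institution
    public_access_path="vetted CERTs, universities, civil society labs",
)
```

The structural point is that the restriction carries its own expiry and its own external reviewer, so the default outcome of inaction is lapse, not permanence.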
In other words, if access has to be gated, it should be governed like a public risk problem, not like a private moat.
That distinction matters.
One leads to temporary safety measures. The other leads to a world where a handful of institutions become the licensed wholesalers of thought.
Conclusion
If a model crosses a real cyber risk threshold, some immediate withholding can be justified. That is not the issue.
The issue is the political logic that follows if nobody resists it.
Controlled access to intelligence is not another product tiering decision. It is a question about who gets the best tools for learning, building, securing, governing, and competing. In a world where “merit” starts to blur into “who was allowed to use the strongest intelligence multiplier”, the access question is not secondary to the capability question. It is the capability question.
The argument is no longer only about whether AI becomes powerful. It is about whether access to that power becomes a universal capability, a licensed privilege, or an instrument of statecraft.
Once intelligence itself becomes strategically rationed, society does not merely become more unequal. It becomes more hierarchical in the deepest possible way: not just unequal in wealth, but unequal in who gets to think with the strongest machines.