Juan Londoño
In early April, Anthropic announced the release of Claude Mythos, a powerful cybersecurity-focused model that promises to address most cybersecurity vulnerabilities at record speed. However, due to the Pentagon’s designation of Anthropic as a “supply chain risk” and the administration’s subsequent actions, the federal government may have denied itself access to Mythos. Some government agencies have already decided to use the model despite the Department of Defense’s designation, as they consider the cybersecurity protections it provides critical. While the government continues to defend the supply chain risk designation, the White House has already reengaged with Anthropic CEO Dario Amodei to reach an agreement that would allow the executive branch to resume using Claude.
The White House’s back-and-forth over the use of Anthropic’s products highlights one of the most serious perils of attempting to regulate emerging technologies. These are rapidly evolving industries in which context, actors, and capabilities shift at a pace policymakers find extremely difficult to match. The relevant actors can change drastically over the span of months, and there is no guarantee that today’s market leaders will hold that position a couple of years from now. For example, six years ago, OpenAI was the clear industry leader after releasing GPT‑3, while Anthropic was a largely unknown name. In less than a year, Meta completely revamped its approach to AI, forming Meta Superintelligence Labs in July 2025 and releasing a brand-new flagship AI model, Muse Spark, in April 2026.
While Mythos’ capability to parse through large quantities of code and detect or exploit cybersecurity weaknesses in a short amount of time was previously deemed feasible, it was almost impossible to predict how quickly that capability would be developed. AI has advanced rapidly, bringing both greater cybersecurity risks and the potential for stronger defenses. For reference, only seven years passed between OpenAI’s concern that GPT‑2, a model limited to text generation that would be considered significantly out of date by today’s standards, was “too dangerous to release” and the release of a model as complex as Mythos. Similarly, what seems unattainable, uncanny, or dangerous today may become ubiquitous in the future. This also shows why a light-touch approach is beneficial: onerous regulation would not have allowed these technologies to develop as quickly. While there may be concerns about Mythos’ potential risks, the consequences of heavy-handed regulation could be more significant.
While there are concerns about government use of AI technology, there are also sectors where it can be critical, such as cybersecurity. Instead of designating Anthropic as a supply chain risk, the Pentagon could simply have rescinded the contract, sought another vendor that could meet its demands, and then reevaluated when it needed specific resources. Labeling the company a supply chain risk not only raises constitutional concerns but also risks tying the administration’s own hands when it comes to accessing the best product on the market.
With a growing number of states considering their own AI policy, these governments, too, should be wary of similar consequences. While state governments could benefit from deploying a model like Mythos to protect their digital infrastructure, some are considering statewide bans on vital AI infrastructure, including data centers.
The White House’s back-and-forth with Anthropic should offer policymakers a valuable lesson: Pick winners and losers in a dynamic market at your own risk. Just as the administration did not foresee how quickly it would need Anthropic’s services when it blacklisted the company’s products, policymakers rushing to regulate AI cannot accurately foresee the costs and consequences of heavy-handed AI regulations. A principles-based, narrowly targeted, light-touch approach is therefore better suited to emerging technologies, allowing a more flexible, less prescriptive response to the rapidly changing environments common in nascent industries. Hopefully, the administration and state governments will heed this lesson and think twice before recklessly wielding the regulatory hammer in the future.
