Thomas A. Berry
Anthropic is a tech company best known for developing the AI model Claude. Until recently, Anthropic contracted with the Department of Defense (also known as the Department of War) to provide AI technology for the department’s use. But Anthropic drew two red lines: it would not provide AI services to aid in lethal autonomous warfare or mass surveillance of Americans. The department grew frustrated with Anthropic’s refusal to depart from these policies. This stalemate came to a head in February of this year, when the department not only ended its contracts with Anthropic (subject to a six-month wind-down period) but also labeled Anthropic a “Supply-Chain Risk to National Security.”
The latter designation forbids any business that contracts with the federal government from working with Anthropic in any capacity, devastating Anthropic’s business.
Now Anthropic has sued the department, arguing that these government actions were unconstitutional retaliation against Anthropic for its First Amendment-protected speech. And Cato has joined the Foundation for Individual Rights and Expression (FIRE), the Electronic Frontier Foundation (EFF), Chamber of Progress, and the First Amendment Lawyers Association (FALA) to file an amicus brief in federal district court supporting Anthropic (with thanks to Addison Bennett, Sopen Shah, and Sarah Grant of Perkins Coie for drafting the brief).
In our brief, we explain why Anthropic’s design choices for its AI models constitute First Amendment-protected expression. Claude is fundamentally expressive. It is a “large language model” that takes in tremendous amounts of data and engages in a process of “reasoning” to respond to user prompts. It can deliver a range of outputs, from written responses to questions to structured databases. Everything it does—from responding to requests to processing data to helping the user complete tasks—involves this back-and-forth exchange of expressions and ideas.
Humans—developers at Anthropic—created Claude. Anthropic designed the AI system and identified data to “train” the model. Claude works only because Anthropic instills it with a set of editorial directions, which are inherently expressive and receive First Amendment protection. In that way, its algorithmic responses are akin to traditional media curated by human editors.
The Defense Department punished Anthropic because it refused to change its algorithm and jettison its guidelines to produce outcomes of the Pentagon’s choosing. Punishing Anthropic for refusing to tailor Claude to the Pentagon’s policy preferences constitutes retaliation against protected speech.
Further, our brief notes that there is no doubt about the government’s retaliatory motives. That is because the Pentagon has already announced them. In Secretary Hegseth’s announcement of his “final” decision designating Anthropic as a supply-chain risk, he chided Anthropic for purportedly failing to create an AI model that is sufficiently “patriotic.” He criticized Anthropic’s “ideological whims,” its policy of “effective altruism,” its supposed “virtue signaling,” and ultimately its “stance” on Claude’s design as “fundamentally incompatible with American principles.” And he made clear that each of these judgments turned on whether Anthropic would delete or alter Claude’s code.
This punishment of Anthropic for its protected speech violated the First Amendment. The district court should enjoin the Pentagon’s unconstitutional and retaliatory designation of Anthropic as a supply-chain risk as promptly as possible.
