United States Secretary of Defense Pete Hegseth directed the Pentagon to designate Anthropic as a "supply-chain risk" on Friday, sending shockwaves through Silicon Valley and leaving many companies scrambling to understand whether they can keep using one of the industry's most popular AI models.
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth wrote in a social media post.
The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup's AI models. In a blog post this week, Anthropic argued its contracts with the Pentagon should not allow its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked that Anthropic agree to let the US military apply its AI to "all lawful uses" with no specific exceptions.
A supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise.
Anthropic responded in another blog post on Friday evening, saying it would "challenge any supply chain risk designation in court," and that such a designation would "set a dangerous precedent for any American company that negotiates with the government."
Anthropic added that it hadn't received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement," the company wrote.
The Pentagon declined to comment.
"This is the most shocking, damaging, and over-reaching thing I have ever seen the United States government do," says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy advisor for AI at the White House. "We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now."
People across Silicon Valley chimed in on social media expressing similar shock and dismay. "The people running this administration are impulsive and vindictive. I believe this is sufficient to explain their behavior," said Paul Graham, founder of the startup accelerator Y Combinator.
Boaz Barak, an OpenAI researcher, said in a post that "kneecapping one of our leading AI companies is right about the worst own goal we can do. I hope very much that cooler heads prevail and this announcement is reversed."
Meanwhile, OpenAI CEO Sam Altman announced on Friday night that the company reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carveouts. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," said Altman. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Confused Customers
In its Friday blog post, Anthropic said a supply chain risk designation, made under the authority of 10 USC 3252, applies only to Department of Defense contracts directly with suppliers, and doesn't cover how contractors use its Claude AI software to serve other customers.
Three experts in federal contracts say it's impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company. Hegseth's announcement "is not mired in any law we can divine right now," says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.
