US Judge Blocks Pentagon’s Anthropic Ban Pending Review

Marcel Fuhrmann

Key Takeaways

  • A US federal judge granted Anthropic a preliminary injunction against the Pentagon’s designation of the company as a supply chain risk.
  • The ruling temporarily halts a directive from President Donald Trump ordering federal agencies to stop using Anthropic’s chatbot, Claude.
  • Judge Rita Lin stated that the government’s actions appeared arbitrary and potentially retaliatory.
  • The dispute centers on failed negotiations over military use of Anthropic’s AI technology.

Court Blocks Pentagon’s Supply Chain Risk Designation

A US federal judge in San Francisco has temporarily blocked the Pentagon from enforcing its designation of AI company Anthropic as a national security supply chain risk. Judge Rita Lin of the District Court for the Northern District of California issued a preliminary injunction preventing the US Department of Defense from applying the label while legal proceedings continue.

The order also halts a directive from President Donald Trump that required all federal agencies to cease using Anthropic’s chatbot, Claude. The directive followed the Pentagon’s classification of the company as a security risk.

In her ruling, Judge Lin stated that nothing in the relevant statute supports the idea that an American company can be labeled a potential adversary or saboteur for expressing disagreement with the government. She described the measures taken by the Trump administration and Defense Secretary Pete Hegseth as broad punitive actions that appeared arbitrary, capricious, and an abuse of discretion.

Background: Failed Pentagon Contract Negotiations

The dispute originates from a July 2025 agreement between Anthropic and the Pentagon. Under that contract, Claude was set to become the first frontier AI model approved for use on classified US government networks.

Negotiations reportedly collapsed in February 2026 when the Pentagon sought to renegotiate the terms. According to the court record, the Department of Defense insisted that Anthropic allow military use of Claude for all lawful purposes and without restrictions.

Anthropic opposed these conditions. The company maintained that its technology should not be used for lethal autonomous weapons or for mass domestic surveillance of Americans. This disagreement marked a turning point in the relationship between the company and the Defense Department.

On Feb. 27, President Trump ordered all federal agencies to stop using Anthropic products. In a public statement on Truth Social, he criticized the company in strong terms, accusing it of attempting to pressure the Department of War.

Legal Challenge and Allegations of Retaliation

Anthropic filed a lawsuit on March 9 in federal court in the District of Columbia, alleging that Defense Secretary Hegseth exceeded his authority by designating the company a national security supply chain risk.

During a 90-minute hearing in San Francisco on March 24, Judge Lin questioned government lawyers about whether Anthropic was being punished for publicly criticizing the Pentagon’s contracting position. In her March 26 ruling, the judge stated that punishing the company for bringing public scrutiny to the government’s stance would constitute classic unlawful First Amendment retaliation.

The preliminary injunction indicates that the court believes Anthropic is likely to succeed on the merits of its constitutional claim. In response, the company said it was grateful that the court had acted swiftly and had found that Anthropic is likely to prevail in the case.

Market Position and Government Impact

Anthropic held a leading position in the enterprise AI market as of 2025, with a reported 32 percent share, ahead of OpenAI at 25 percent, according to Menlo Ventures. A government-wide ban on Anthropic products could have significantly eroded that position, particularly given the importance of federal contracts in advanced technology sectors.

The temporary injunction prevents immediate enforcement of the federal ban while the legal process unfolds. For companies operating in technology-driven markets, including those serving financial services, crypto infrastructure, or digital platforms, federal procurement decisions can influence competitive positioning and access to regulated sectors.

The case highlights how contractual disputes between private technology providers and US government agencies can escalate into broader regulatory and constitutional conflicts. It also underscores the legal limits that courts may impose on executive branch actions when constitutional rights are implicated.

Next Steps in the Legal Process

The preliminary injunction does not resolve the underlying lawsuit. It temporarily preserves the status quo while the court evaluates the full merits of Anthropic’s claims. Further proceedings will determine whether the Pentagon’s designation and the presidential directive can stand under statutory and constitutional scrutiny.

For now, federal agencies may continue using Anthropic’s products, as the directive requiring them to stop is blocked. The final outcome will depend on subsequent court rulings addressing both the scope of executive authority and the application of First Amendment protections in the context of federal contracting.

Our Assessment

The court’s preliminary injunction prevents immediate enforcement of a federal ban on Anthropic and suspends its designation as a supply chain risk. The ruling centers on constitutional concerns, particularly potential First Amendment retaliation. The case remains ongoing, with further judicial review set to determine whether the Pentagon’s actions and the presidential directive comply with US law.