Federal Judge Blocks Pentagon's Ban On Anthropic, Calls It 'Classic Illegal First Amendment Retaliation'

· Free Press Journal

A federal judge in San Francisco has dealt a significant legal blow to the Trump administration, granting AI company Anthropic a preliminary injunction that temporarily blocks the Pentagon's effort to designate it as a national security threat. The ruling, reported by CNBC, marks a major victory for the Claude-maker in its escalating standoff with the US government.


What did the judge say?

Judge Rita Lin issued the ruling on Thursday, two days after lawyers for Anthropic and the US government appeared in court for a hearing. In a strongly worded 43-page opinion, Lin wrote that the supply chain risk designation violated the company's First Amendment and due process rights, adding that the measures did not appear directed at the government's stated national security interests. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote.


The order bars the Trump administration from implementing or enforcing the president's directive, and limits the Pentagon's efforts to designate Anthropic as a threat to US national security. Lin paused the ruling for seven days to allow the government to seek relief from an appeals court.

How did it get here?

Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude's deployment on the DOD's GenAI.mil platform in September, talks stalled. The DOD wanted unfettered access to Claude for all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance.

Defense Secretary Pete Hegseth took the unprecedented step of labeling Anthropic a supply chain risk in February, and both Hegseth and President Trump ordered federal agencies to cease using Claude and sever ties with companies that do business with the AI startup. The designation, normally reserved for foreign adversaries, required defense contractors including Amazon, Microsoft, and Palantir to certify they do not use Claude in their military work.


'An attempt to cripple Anthropic'

During Tuesday's hearing, Judge Lin said her concern was that Anthropic was being "punished" and questioned whether the DOD had violated the law, noting that the Pentagon's decision looked like an attempt to cripple the company. She said the actions were "troubling" because they did not appear tailored to the stated national security concerns, which could have been addressed simply by the Pentagon choosing to stop using Claude.

The government argued that Anthropic had rendered itself untrustworthy, and that the company could theoretically update Claude in ways that might endanger national security, a claim Lin did not find persuasive.

Anthropic's response

An Anthropic spokesperson said the company was "grateful to the court for moving swiftly" and "pleased" that the judge agreed the company was likely to succeed on the merits, adding that its focus remains on working productively with the government to ensure all Americans benefit from safe and reliable AI.


A final verdict in the case could still be months away. Anthropic has also filed a separate lawsuit for a formal review of the DOD's determination in the US Court of Appeals in Washington DC. The Justice Department can now seek an emergency stay from the 9th Circuit before the injunction takes effect.
