Microsoft has thrown its support behind artificial intelligence company Anthropic in a legal battle against the United States Department of Defense, which recently designated the AI startup as a potential supply-chain risk.

The tech giant filed an amicus brief in a federal court in San Francisco, supporting Anthropic’s request for a temporary restraining order that would halt the Pentagon’s decision while the court reviews the case.

The dispute marks a significant escalation in tensions between AI developers and the US military over how emerging technologies are used in defence systems.

Microsoft Warns of Disruptions to Defence Technology

Microsoft said the Pentagon’s designation could disrupt critical technology systems used by government contractors.

The company integrates Anthropic’s AI technology into solutions it provides to the US military, so restrictions on the startup’s tools could ripple across defence-related projects.

In its court filing, Microsoft argued that the Pentagon’s decision should be paused until the legal case is fully examined.

“Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors will be forced to account for a new risk in their business planning,” the company said.

Anthropic Challenges Pentagon ‘Supply-Chain Risk’ Label

Anthropic, the developer behind the Claude AI model, filed its lawsuit earlier this week seeking to prevent the Pentagon from placing it on what is effectively a national security blacklist.

The company argues that the designation could limit its ability to work with defence contractors and government agencies.

Microsoft said a temporary restraining order would give suppliers time to avoid costly disruptions and redesigns while the legal dispute is resolved.

The Pentagon reportedly gave itself six months to phase out Anthropic technology, but contractors that rely on those systems were not given the same transition period.

Growing Debate Over AI Use in Military Systems

The case highlights broader concerns about how advanced artificial intelligence tools are integrated into national security systems.

Microsoft said pausing the Pentagon’s designation would allow time to negotiate safeguards that ensure AI technologies are used responsibly in defence environments.

The company also stressed that the delay would help ensure AI systems are not used for domestic mass surveillance, or for autonomous warfare without human oversight.

AI Industry Voices Support for Anthropic

Anthropic has also received backing from other leaders in the artificial intelligence community.

Earlier this week, 37 researchers and engineers from companies including OpenAI and Google filed their own amicus brief supporting Anthropic’s challenge.

The growing coalition of technology experts argues that restrictions on AI companies could slow innovation and disrupt collaboration between private technology firms and government agencies.

What Happens Next

The federal court in San Francisco will now consider whether to grant the temporary restraining order requested by Anthropic.

If granted, the order would suspend the Pentagon’s designation while the case proceeds.

The outcome could have major implications for how artificial intelligence companies interact with government and defence institutions, particularly as AI technologies become increasingly central to national security strategies.