Enterprises and startups using Anthropic's Claude through Microsoft's offerings can breathe easy: Microsoft has reassured users that the AI model will remain accessible. The announcement comes amid rising tensions between Anthropic and the Department of Defense (DoD).
Microsoft is the first major tech firm to provide such an assurance, emphasizing that Anthropic's models will continue to be available to its clientele despite the DoD's recent designation of the AI company as a supply chain risk. The designation came after Anthropic declined to grant unrestricted access to its technology for applications it deems unsafe, such as mass surveillance and fully autonomous weapon systems.
The supply chain risk label is typically reserved for foreign adversaries, which means the Pentagon is unable to utilize Anthropic's products. Furthermore, any contractor or agency working with the DoD must certify that they do not employ Anthropic's models. In response, Anthropic has pledged to challenge this designation legally.
Microsoft, which provides a variety of products, from Office to cloud services, to numerous federal agencies, including the DoD, stated that it will maintain access to Anthropic's models within its platforms. A Microsoft spokesperson clarified that its legal team has determined that Anthropic's products, including Claude, can still be offered to customers outside the DoD through platforms such as M365, GitHub, and Microsoft's AI Foundry.
"Our focus remains on ensuring that Anthropic products are available to our customers, except when it involves direct contracts with the DoD," the spokesperson noted. This sentiment echoes Anthropic CEO Dario Amodei's commitment to contest the supply chain designation.
Despite the challenges, Claude continues to experience significant growth in the consumer sector, demonstrating resilience after Anthropic's refusal to comply with the DoD's demands.