Microsoft, Google, and Elon Musk’s xAI have agreed to provide the U.S. government with early access to upcoming artificial intelligence models for national security testing. This move comes as U.S. officials express concern over the hacking potential of Anthropic’s latest Mythos AI system.
The Department of Commerce’s Center for AI Standards and Innovation (CAISI) announced Tuesday that the agreement will enable the government to evaluate these models prior to deployment and to conduct research assessing their capabilities and security vulnerabilities. The initiative fulfills a commitment made by the Trump administration in July 2025 to collaborate with tech companies in vetting AI models for national security risks.
Microsoft plans to team up with U.S. government scientists to test AI systems, focusing on detecting unexpected behaviors. They will develop shared datasets and testing workflows for Microsoft’s models and have already signed a similar partnership with the UK’s AI Security Institute.
Concerns are mounting in Washington over the national security threats posed by advanced AI systems. Early access to cutting-edge models aims to help U.S. officials identify potential dangers—ranging from cyberattacks to military misapplications—before these tools become widely adopted.
Recently, the development of powerful AI systems, including Anthropic’s Mythos, has sparked debate among American officials and corporations over those systems’ capacity to empower hackers and other malicious actors.
“Robust, independent measurement science is critical to understanding frontier AI and its implications for national security,” said Chris Fall, director of CAISI.
This effort builds upon previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the U.S. Artificial Intelligence Safety Institute. During Biden’s term, the institute focused on developing AI testing procedures, safety definitions, and voluntary standards. Notably, Elizabeth Kelly, Biden’s tech adviser and current Anthropic executive, once led the institute.
CAISI, serving as the government’s primary AI testing hub, reports having completed over 40 evaluations, including on advanced models not yet available to the public. Developers often provide versions of their models with reduced safety restrictions, allowing the center to probe for potential security risks.
xAI did not immediately offer comment, and Google also declined to respond.
Last week, the Pentagon announced agreements with seven AI firms to deploy their advanced systems on classified military networks, aiming to expand AI capabilities across defense operations. Anthropic, however, was not included, reportedly because of a disagreement with the Pentagon over safety guardrails in military applications.