Microsoft, Google and xAI to give US government early access to AI models for security checks
US govt securing early access to identify threats ranging from cyberattacks to military misuse
Major tech companies, including Microsoft, Google and xAI, have agreed to give the US government early access to advanced AI models for national security testing, amid rising concern over the powerful capabilities of new systems such as Anthropic's Mythos.
WASHINGTON – The US Department of Commerce announced that its Center for AI Standards and Innovation (CAISI) will evaluate cutting-edge AI models before public release to assess potential security threats.
The initiative follows a 2025 commitment by the administration of Donald Trump to collaborate with technology firms in identifying national security risks linked to artificial intelligence.
Microsoft said it will partner with government scientists to test AI systems for unexpected behaviours, while also developing shared datasets and evaluation methods. The company previously entered a similar agreement with the UK’s AI Security Institute.
Growing concern in Washington centres on how advanced AI could enable cyberattacks or military misuse. By gaining early access to these “frontier” models, officials aim to detect risks before widespread deployment.
Recent releases, particularly Anthropic’s Mythos, have intensified global scrutiny because of their potential to significantly enhance hacking capabilities. “Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said CAISI Director Chris Fall.
The move builds on earlier partnerships with OpenAI and Anthropic established in 2024, when the institute, then known as the US Artificial Intelligence Safety Institute, operated under the previous Biden administration. It was led at the time by tech adviser Elizabeth Kelly, who has since joined Anthropic.
CAISI, now the central hub for US government AI testing, has completed more than 40 evaluations, some of them on models not yet publicly available. Developers often provide versions of their systems with reduced safety guardrails so vulnerabilities can be thoroughly examined.
Meanwhile, the Pentagon recently signed agreements with seven AI companies to deploy advanced technologies across classified military networks. However, Anthropic was not included, reportedly due to ongoing disagreements over safeguards governing military use of its AI systems.