Anthropic announces new AI models for military and defense agencies
Last Week in AI Policy #21 - Jun 9, 2025
Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
Anthropic released Claude Gov models for national security agencies.
Anthropic introduced a new set of models aimed specifically at U.S. defense and intelligence agencies.
They are “already deployed by agencies at the highest level of U.S. national security,” according to the company.
They provide “enhanced performance for critical government needs and specialized tasks,” including:
Improved handling of classified materials (the models refuse less often when working with them).
Greater understanding of intelligence/defense documents.
Enhanced proficiency in NatSec-relevant languages/dialects.
In large part, this is competition heating up: the latest in a line of public-private partnerships with AI developers, particularly in defense and national security.
After months of speculation about its future under the Trump administration, the U.S. AI Safety Institute (AISI) has officially rebranded as the Center for AI Standards and Innovation (CAISI).
CAISI will “serve as industry’s primary point of contact within the U.S. Government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems.”
It will do this by:
Developing guidelines and best practices
Leading evaluations of AI capabilities
Evaluating adversary AI systems, adoption of foreign AI, and the state of international competition
Coordinating among federal departments and agencies
Representing U.S. interests internationally
Garrison Lovely notes the not-so-subtle omission of rogue AI from the risks deemed “demonstrable.”
Also notable was the explicit attempt to distance the new agency from the “censorship and regulations” that had been used to “limit innovators” under the guise of national security.
So, more of the same rhetorical separation of the current administration from the past one. But we’ll have to wait (three months and counting) for the more substantial AI Action Plan.
The Senate proposed an alternative version of the moratorium on state AI legislation.
Previously, the Republican megabill sought to block all state legislation on AI for ten years.
While the bill passed through the House Commerce Committee easily, it was challenged in the Senate, with some arguing that it would violate the Byrd Rule.
In a revised version, Senate Commerce Committee Republicans are now trying to tie the moratorium to federal funding for broadband access.
The committee rewrote the moratorium to make it a prerequisite for receiving funding from the Broadband Equity, Access, and Deployment (BEAD) program.
Within the proposal was a clause stipulating that “no amounts” of BEAD money should go to states not in compliance with the moratorium.
This appears to be a (possibly misguided) attempt to make the bill Byrd-compliant.
Revisions or not, the moratorium faces widespread opposition from both sides of the aisle.
Press Clips
Peter Wildeford joined the Center for AI Policy Podcast to talk AI policy and forecasting. 📽
Miles Brundage on AI as a liquid. ✍
Data from Bharat Chandar on AI job losses. 🐦
Anton Leicht on political momentum building around AI job losses. ✍
Kevin Xu on a US-China ‘tech grand bargain.’ ✍
Zilan Qian explained how banned U.S. models are bought and used in China. ✍
Nathan Lambert joined ChinaTalk to discuss model sycophancy. 📽