Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI policy and governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
AI Leaders Respond to US AI Action Plan
OpenAI urged federal policymakers to impose strict bans on AI models produced by state-affiliated labs in China, specifically identifying DeepSeek as posing serious data security and intellectual property risks.
Google made the case for broad access to copyrighted works for AI training, arguing that restrictive copyright and patent laws could impede AI progress. The company recommended federal regulations that prioritize clarity and consistency, along with an industry-friendly framework to sustain America's global leadership.
Anthropic advocated for more active government oversight, including mandatory security evaluations of frontier AI systems, tighter export restrictions, and comprehensive testing infrastructure to proactively manage national security risks.
Other leading voices in tech weighed in, including Palantir and Andreessen Horowitz.
Congress Targets Chinese Semiconductors, Strengthens Tech Alliances
Congressman John Moolenaar, Chair of the House Select Committee on China, proposed applying tariffs to all Chinese-manufactured semiconductors entering the United States.
He recommended coordination with partners like Japan, South Korea, and Taiwan to undercut China's ability to dominate the semiconductor market.
Semiconductor manufacturing continues to fracture along geopolitical lines: SoftBank recently acquired a former Sharp manufacturing plant in Japan for $676 million, intending to use the facility to support its ongoing infrastructure partnership with OpenAI.
Oversight Democrat Raises Concerns on AI in Federal Agencies
Rep. Gerry Connolly (D-Va.) warned federal agencies against using unauthorized AI tools that do not comply with privacy laws and regulations.
Connolly expressed concerns that Elon Musk’s Department of Government Efficiency (DOGE) may have improperly used AI tools, potentially violating laws like the Privacy Act and FISMA.
Connolly referenced reports indicating DOGE members used unauthorized AI tools like Inventry.ai, which hasn't been approved through the FedRAMP security process.
Agencies must respond to Connolly’s inquiry by March 26 with documentation detailing their data usage, the AI tools employed, and the individuals involved.
Tech Leaders Warn Against NIST Cuts
Top technology associations warned Commerce Secretary Howard Lutnick that cuts to the National Institute of Standards and Technology (NIST) threaten U.S. leadership in artificial intelligence.
The warning comes as the Trump administration makes significant job cuts across federal agencies, including recent dismissals of NIST employees.
The letter argued for the importance of continuing NIST's work in public-private partnerships, standards-setting, and international harmonization to maintain global competitiveness.
Signatories included TechNet, the Center for AI Policy, and the Computer & Communications Industry Association, among others.
Press Clips
It's time to take AI job loss seriously (Slow Boring)
AGI, Governments, and Free Societies (Justin Bullock, Samuel Hammond, Seb Krier)
Humanoid Robots: The Long Road Ahead (ChinaTalk)
Glimpses of AI Progress (AI Pathways)
Gigawatt Dreams and Matryoshka Brains Limited By Datacenters Not Chips (SemiAnalysis)
DeepSeek, a National Treasure in China, is Now Being Closely Guarded (The Information) 🔒
How Generative A.I. Complements the MAGA Style (New York Times) 🔒
Powerful A.I. Is Coming. We’re Not Ready. (New York Times) 🔒