Policy
Trump Administration Officially Abandoned Tiered ‘Framework for Artificial Intelligence Diffusion’
On Wednesday, the Trump administration announced it will scrap the Biden-era rule in favor of “a much simpler rule that unleashes American innovation and ensures American dominance”, according to a spokesperson from the Department of Commerce.
As detailed last week, Biden’s Framework for Artificial Intelligence Diffusion sought to block exports of top-end AI components to US adversaries like China, Russia, Iran, and North Korea using a tiered approach.
The tiered framework grouped countries by risk level, with stricter rules for lower-tier nations.
Current Commerce officials consider this approach “unenforceable”. They point to concerns that countries designated as Tier 3 could circumvent restrictions by simply purchasing second-hand restricted components from countries higher up the list.
Precedent for this tactic already exists under current controls. While the Trump administration has not provided details of its alternative plans, it will likely aim for greater control through bilateral agreements and a global licensing system.
The Framework for Artificial Intelligence Diffusion was set to come into force on May 15, and the Trump administration has not yet given a timeline for its plans. So, expect developments in the coming weeks.
New Senate Bill Would Require Location Tracking on Exported AI Chips
The bill, which would require chip manufacturers to fit their chips with location-tracking devices, was introduced by Republican Senator Tom Cotton of Arkansas on Friday.
It would instruct companies to alert the Bureau of Industry and Security if their chips are diverted from their intended buyer or tampered with.
Putting the bill in the context of the so-called AI race between the US and China, Cotton commented that "with these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security".
Given the perceived stakes of AI proliferation to China, the measure may garner bipartisan support.
The full bill text can be found here.
US Federal Courts Announced Interest in Using AI to Improve Court System
The federal judiciary’s AI lead, Paul Drutz-Hannahs, made the case that the decentralized nature of US courts provides an ideal environment for testing competing uses of AI.
Courts operate autonomously in terms of workflow, but share the same objectives. Drutz-Hannahs compared these conditions to the idea of states as testing grounds for democracy.
The same analysis can be made of federal agencies and other governing bodies, another area where we are seeing a shift towards AI-centred working.
Several such developments occurred this week.
The FDA has named its first AI chief, Jeremy Walsh, signalling intentions to modernize its practices.
US Special Operations Command has announced it is piloting AI tools to streamline acquisition workflows.
And the Marine Corps released an AI implementation plan, aiming to enhance decision-making, tactical innovation, and public-private partnerships as part of a broader digital transformation.
Tech Leaders Testified at Senate Hearing on US AI Primacy
Tech leaders Sam Altman (OpenAI), Lisa Su (AMD), Michael Intrator (CoreWeave), and Brad Smith (Microsoft) were called to speak at a Senate hearing on Thursday, where they expressed their views on how to secure US primacy in AI.
All four called for a loosening of controls with the aim of boosting exports.
Brad Smith made the case that the defining issue in the AI race is “whose technology is most broadly adopted in the rest of the world”.
(Notably, this statement came in the same week that Microsoft barred its workforce from using DeepSeek.)
Loosening export controls would almost certainly personally benefit the four tech leaders. However, they made the point that a broader investment is needed in AI infrastructure such as data centres, power stations, education and training to achieve what they see as the mutual goal of AI proliferation.
Industry leaders and the current administration seem to share the opinion that while it is crucial to prevent China from acquiring high-spec chips, the Biden-era rules were excessive and ultimately counterproductive.
AI Testing Standards Bill Reintroduced in the Senate
The Testing and Evaluation Systems for Trusted Artificial Intelligence Act was reintroduced in the Senate this week, after it failed to reach the floor last year.
The bill aims to combine National Institute of Standards and Technology expertise with Department of Energy labs to pilot a program creating measurement standards to assess AI systems on safety, reliability, security, privacy, and data bias.
The bill would instruct the partnership to form a strategy for measurement standards, with results from the initial testbed reported to Congress within 180 days of the first tests.
Though it failed last time, growing emphasis on the need for uniform safety standards for AI may now create momentum for such legislation.
One member behind the bill, Sen. Ben Ray Luján (D-NM), stated “AI has reached every sector in our country and driven innovation, but we cannot ignore the vulnerabilities and risks that come with it”.
California’s Privacy Protection Agency Backed Down on AI Regulations Following Backlash
The California agency has toned down draft regulations on AI and other automated technology following concerns raised by business groups and Governor Newsom.
The regulations aimed to improve safeguards around non-human, automated decision-making systems such as those used in behavioral advertising, hiring and firing, education, and financial determinations.
Governor Newsom warned that the draft regulations were not business friendly and undermined existing legislation in place at the state level.
According to agency staff, the revised draft would reduce costs for businesses in the first year of enforcement from $834 million to $143 million, and 90% of the businesses required to comply under the original draft would no longer have to do so.
The deadline for public comments on the revised draft is June 2, and the rules would come into force by 2027.
Press Clips
Shakeel Hashim and Lynette Bye on the future of OpenAI. ✍
Kathrin Gardhouse on strengthening cybersecurity via DoD procurement. ✍
Anton Leicht on The New AI Policy Frontier. ✍
SCSP: Episode 76: Dr. Nandita Balakrishnan and Anna Knack on Applying AI to Strategic Warning. 🔉
A legal expert’s take on OpenAI’s backtracking this week. 📽
Jack Clark on AI’s uneven impact. 🔉
Arnold Kling talks AI, productivity statistics, and the Solow paradox. ✍
From Epoch:
Josh You asks how far reasoning models can scale. ✍
Anson Ho asks “where’s my ten minute AGI?” ✍
Max Tabarrok on post-Malthusian AI. ✍