Last Week, This Week #6 – A New Year
Parting export controls from the Biden administration, and the return of SB 1047.
Delivered to your inbox every Monday, Last Week This Week is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Things are beginning to heat up again as lawmakers and government officials return to work, and the presidential transition looms. This week’s edition–the first of the New Year–covers policy and governance developments over the past two to three weeks. Notably, there has been action at both the federal and state levels.
At the federal level, the big story is the leaked-then-quickly-announced export controls, including controls on model weights. At the state level, Scott Wiener appears to be readying himself for another attempt at frontier model regulation a la SB 1047, and similar bills have been introduced in the Texas and New York state governments. If enacted, these bills would see three of the four most populous states implementing SB 1047-like regulation on frontier model developers.
These developments, and more, detailed below.
As a parting gift, the Biden administration and the Bureau of Industry and Security (BIS) announced a new set of export controls: the 'AI diffusion' rules.
Secretary of Commerce Gina Raimondo said that the policy would “protect against the national security risks associated with AI, while ensuring controls do not stifle innovation or US technological leadership.” This expression, and others like it, are now so overused among American officials that they are about as illuminating as saying, “we’re going to do more of the good thing and less of the bad thing.” Clichés aside, these changes also appear to push us ever further along the path of technological competition and general hostility with China, in two crucial ways.
First, the new policy further refines an existing hierarchy of America’s allies and adversaries. In short, the proposed changes would place restrictions on the export of advanced chips, while creating preferential exceptions through the use of the validated end user (VEU) program. Originally introduced in 2007, the VEU program aims to facilitate high-technology trade by allowing exporters to ship certain controlled items to pre-approved entities under a general authorization instead of under multiple individual export licenses. Importantly, the new rules bifurcate the existing system into two new classifications: universal and national VEUs – UVEU and NVEU, respectively. Close allies like the United Kingdom and many European nations fall under the UVEU category and will face few, if any, restrictions, while other countries (the rule’s ‘Tier 2’ countries) will face more significant restrictions, including caps on the total computing power they may import. In Transformer, Shakeel Hashim describes the material implications of these caps:
In 2025, the draft lists that cap as 507,000,000 in total processing power (TPP), rising to 1,020,000,000 by 2027. For comparison, CSET estimates that an Nvidia H100 has a TPP of 15,832, so the maximum-sized cluster a country could build this year is of about 32,000 H100s.
However, the US will allow those limits to be circumvented under the “validated end user” (VEU) authorisation program, which allows companies to build clusters in Tier 2 countries. To qualify, data centres will need to meet stringent security requirements, including compliance with FedRAMP High standards, which requires annual third-party security audits. But if they do comply, companies will be able to build larger clusters: according to the draft, clusters of up to 633,000,000 TPP in Q1 of 2025, rising to 5,064,000,000 TPP in Q1 of 2027 — the equivalent of a 320,000 H100 cluster.
The ratios are worth noting. Current frontier models are estimated to have been trained on clusters equivalent in size to around 20,000 H100s, and the forthcoming generation of models are thought to be being trained on clusters of around 100,000 H100s — the size of xAI’s new Memphis cluster. According to the draft numbers (which, again, may change), Tier 2 countries will be allowed to build a cluster of up to 32,000 H100s in 2025. VEUs, meanwhile, can build clusters of about 40,000 H100s in Tier 2 countries.
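For readers who want to check these ratios, the conversion is just division: a TPP cap divided by the TPP of a single chip gives the H100-equivalent cluster size. A minimal sketch, using the draft figures quoted above and CSET’s estimated per-H100 TPP (all numbers are from the article, and the draft caps may change):

```python
# CSET's estimate of an Nvidia H100's total processing power (TPP), as cited above.
H100_TPP = 15_832

def h100_equivalent(tpp_cap: int) -> int:
    """Approximate cluster size, in H100s, allowed under a given TPP cap."""
    return tpp_cap // H100_TPP

# Draft caps reported by Transformer (subject to change in the final rule).
caps = {
    "Tier 2 country cap, 2025":  507_000_000,   # ~32,000 H100s
    "Tier 2 country cap, 2027":  1_020_000_000,
    "VEU cluster cap, Q1 2025":  633_000_000,   # ~40,000 H100s
    "VEU cluster cap, Q1 2027":  5_064_000_000, # ~320,000 H100s
}

for label, tpp in caps.items():
    print(f"{label}: ~{h100_equivalent(tpp):,} H100s")
```

Running this reproduces the article’s rounded figures, including the roughly 32,000-H100 country cap for 2025 and the roughly 320,000-H100 VEU ceiling for 2027.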
Second, for the first time, the announcement includes controls on model weights. As the first attempt of its kind–that of regulating the underlying intellectual property associated with frontier models–this move is an escalation in the ongoing strategy of technological containment, and the attempt to stem the development of powerful AI among America’s adversaries. A press release and the full text of the rule are available online.
A new House bill was introduced that would “prohibit American dollars from being invested in critical sectors of the Chinese military and economy.”
Introduced by Representatives John Moolenaar (R-MI)—Chairman of the Select Committee on the CCP—and Andy Barr (R-KY), the bill follows the Select Committee’s economic report, which described “problematic outbound capital investments” into China, and made a set of related recommendations. Some of those recommendations form the basis of this bill, the key provisions of which are as follows:
Sanctions: Imposing sanctions on individuals and entities in the People’s Republic of China (PRC) deemed to pose a threat to U.S. national security. The sanctions include freezing assets and blocking transactions involving property within U.S. jurisdiction.
Investment Restrictions: Prohibiting and requiring notification for certain investments by U.S. persons in PRC sectors critical to national security, such as defense and surveillance technology.
Transparency: Establishing a public database of entities considered a threat, while ensuring confidentiality for certain data; requiring regular reporting to Congress on enforcement actions and national security assessments related to investments in the PRC.
Multilateral Engagement: Promoting coordination with allied nations to establish similar mechanisms, and to prevent technology transfer or investments that could benefit adversarial states.
The full text and a press release from the Select Committee are available.
California State Senator Scott Wiener introduced SB 53.
This bill “[declares] the intent of the Legislature to enact legislation that would establish safeguards for the development of AI frontier models and that would build state capacity for the use of AI.” Seemingly undeterred by the failure of his previous legislation, SB 1047, Senator Wiener appears to be preparing for a second attempt at passing a meaningful piece of frontier model regulation.
On Christmas Eve, Texas State Rep. Giovanni Capriglione (R) introduced the Texas Responsible Artificial Intelligence Governance Act (TRAIGA).
Like SB 1047, TRAIGA attempts to regulate model developers by placing a duty of “reasonable care” on them–specifically relating to instances of algorithmic discrimination against individuals on the basis of protected characteristics–and exposing those companies to liability for failures of that duty. Dean Ball has two essays on TRAIGA, here and here. The full text of the bill is available here.
New York assemblymember Alex Bores announced the Responsible AI Safety and Education (RAISE) Act.
This bill, as yet unpublished, would place restrictions and liabilities on AI labs for the development and deployment of their models, while seeking to “address many of the concerns that blocked SB 1047 from passing into law.”
New York Governor Kathy Hochul signed into law the legislative oversight of automated decision-making in government (LOADInG) act.
The act aims to “regulate the use of automated decision-making systems and artificial intelligence techniques by state agencies.”
Sendhil Mullainathan, MIT professor, discusses economics in the age of AI.
Sam Altman reflects on two years of ChatGPT, speculating that 2025 will see the first AI agents in the workforce, helping to usher in the “glorious future.”
President-elect Trump announced a $20 billion investment in American data centers by Emirati property developer DAMAC, and its CEO, billionaire Hussain Sajwani.
Miles Brundage, formerly of OpenAI, makes the case for security at frontier model developers as a policy priority.
Chris Miller (author of Chip War) and Lennart Heim (RAND) join ChinaTalk to discuss geopolitics in the age of test-time compute.