Last Week, This Week #2 - December 2, 2024
China, China, China. Trump picks, Trump picks, Trump picks.
Delivered to your inbox every Monday morning, LWTW is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
CSIS published a report on export controls, and their effectiveness in curtailing Chinese ambitions in semiconductor manufacturing. TLDR: they aren’t working. Despite U.S. efforts, China continues to invest heavily in domestic semiconductor equipment development, with R&D spending growing explosively. Ironically, export controls may have accelerated China's desire for technological self-sufficiency, with Chinese firms like Yangtze Memory Technologies Corporation launching full-blown "de-Americanization" campaigns even before strict controls were implemented. Notably, the report suggests that China's ambitions for semiconductor independence predate current export control efforts, with goals outlined as early as the 2015 "Made in China 2025" roadmap.
Senator Welch (D-VT) introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, which would compel greater transparency among generative AI developers. In particular, the act would provide a legal mechanism (an administrative subpoena, to be exact) for copyright holders to determine whether their material had been used in training a given generative AI model. Artists, writers, and musicians who could demonstrate a “good faith belief” that their work had been used to train a model would be able to subpoena training records. Failure to comply on the part of the developer would create a legal presumption that the copyrighted work had, in fact, been used. While the proposed legislation has been well-received within industry, it raises long-standing questions about intellectual property and fair use that have yet to be satisfactorily answered. Relatedly, Joshua Levine discusses ongoing copyright lawsuits against AI developers, and their potential impact on American AI innovation.
Benjamin Todd argues that AI development in China is 1-2 years behind the cutting edge not because of compute, but commitment. Despite controls on exports of high-end chips, he says, Chinese labs are more-or-less freely able to acquire black-market H100s, such that the restrictions “just aren’t binding at scale.” As others have pointed out, and Todd acknowledges in an addendum, this contradicts claims coming directly from Chinese labs, which point to high-end chip embargoes as a key constraint on model development. On the other hand, a back-of-the-envelope calculation (based on Epoch AI’s research) suggests that a two-year lag in chip quality would imply a 2x cost multiple for training runs. Given what we know about training run costs, this should still be eminently affordable for China. So, what can we say about the true reason that China’s models still trail the US? As is often the case with AI, not much.
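The arithmetic behind that back-of-the-envelope figure can be sketched in a few lines. The key assumption (ours, not Todd’s or Epoch’s exact number) is that GPU price-performance, i.e. FLOP/s per dollar, doubles roughly every two years, broadly in line with Epoch AI’s hardware-trend estimates; under that assumption, running a fixed-compute training run on hardware that lags the frontier scales cost exponentially in the lag.

```python
# Back-of-the-envelope: cost multiplier for a fixed-compute training run
# performed on chips that lag the frontier by `lag_years`.
#
# Assumption (illustrative, not from the newsletter): GPU price-performance
# (FLOP/s per dollar) doubles roughly every 2 years.

def cost_multiple(lag_years: float, doubling_time_years: float = 2.0) -> float:
    """Factor by which a fixed-compute run costs more on older hardware."""
    return 2 ** (lag_years / doubling_time_years)

if __name__ == "__main__":
    # A two-year lag under a two-year doubling time implies a 2x cost multiple.
    print(cost_multiple(2.0))
```

With the illustrative two-year doubling time, a two-year hardware lag comes out to exactly the 2x multiple cited above; a faster doubling time would make the penalty steeper, a slower one milder.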
Trump is set to tap Kevin Hassett to lead the National Economic Council (NEC). Hassett chaired the Council of Economic Advisers (CEA) during Trump 1.0, and has previously advised assorted Republican presidential administrations and candidates. Whereas the CEA is the economic ‘brain’ of the White House – offering ad hoc expertise and analysis as a sort of internal consultancy service – the NEC is responsible for coordinating between various heads of departments in developing economic policy. Charles Blahous of the Mercatus Center illustrates the difference with the following example:
“If the president sought an analysis of why young adult males are dropping out of the labor force, that analysis would likely be authored by CEA, submitted through a process led by NEC. If on the other hand the president wished to develop a policy to increase labor force participation, NEC would lead that process with the CEA chair as one of the participants.”
In this sense, the NEC is more proactive with respect to policy, closer to the details, and generally has greater oversight of the day-to-day formulation of policy. In short, it’s an important position. What, then, do we know about Hassett’s views on AI? Not much, unfortunately. However, in a January 2024 interview on Fox News, while being questioned on Biden’s surprisingly strong Q4 2023 economy, he makes an awkward segue into a prediction that AI could drive 3% annual productivity growth. In making the prediction, he references the work of Stanford Professor Erik Brynjolfsson, who has worked extensively on the economic impacts of AI. It’s difficult to draw many concrete conclusions from this interview alone, but, broadly speaking, we can assume that Hassett is familiar with the basic economic arguments around transformative AI, and has come out (at least rhetorically) bullish.
Bonus Reads:
Donald Trump is allegedly considering naming an AI czar in the White House. ‘Czar’ is typically a fairly wishy-washy position with a limited mandate, so the true importance of this move remains to be seen. That said, it should still be viewed as discursively significant.
A translated interview with the CEO of Chinese AI startup DeepSeek. A relative newcomer in the AI world, DeepSeek’s latest model beat OpenAI’s o1 on several benchmarks.
Prime Intellect has completed a decentralized training run of a 10B-parameter model. As Jack Clark describes, this has far-reaching implications for states’ abilities to regulate compute, much of which is dependent on the assumption of centralized training.
Amarda Shehu, Jesse Kirkpatrick, and J.P. Singh of George Mason University share their thoughts on the U.S. National Security Memorandum on AI, released in October.
A handful of AI- and tech-focused organizations co-signed an open letter to Congress, urging “swift and decisive action” during the remaining two months of the 118th Congress (the so-called “lame duck” period, when lawmakers are expected to be relatively inactive) to pass legislation that “[promotes] responsible artificial intelligence.”



