Last Week, This Week #1 - November 25, 2024
Manhattan Project-for-AGI achieved internally, hawkish Trump nominations, FLOPS limits, and more.
Delivered to your inbox every Monday morning, LWTW is a rundown of the previous week’s happenings in AI governance in America: news, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Note: This is the first edition of LWTW since the presidential election. Some items below date from before the previous week, in the interest of comprehensiveness.
The U.S.-China Economic and Security Review Commission published its 2024 Annual Report to Congress. In what is surely the most direct acknowledgement to date of U.S.-China competition over AI, the first of the Commission’s recommendations was that “Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability”, echoing Leopold Aschenbrenner’s essay Situational Awareness from earlier this year. However, as Dean Ball points out, the term AGI appears nowhere else in the document’s 800 pages. This suggests the headline recommendation is less a serious policy prescription than an attempt to “squeeze this idea into the Overton Window” and gauge the public reaction.
Trump national security appointees signal a new hawkish China policy (Cate Cadell and Ellen Nakashima, Washington Post). The President-elect continues his appointments apace, with several signalling a more hardline stance on China. Rep. Elise Stefanik (R-NY), a harsh critic of China and the sponsor of various pieces of tough-on-China legislation, has been nominated to serve as ambassador to the United Nations. The selections of Rep. Mike Waltz (R-FL) as national security advisor and Sen. Marco Rubio (R-FL) as secretary of state suggest a similar stance. In the article, Eric Sayers, adjunct fellow at the American Enterprise Institute, argues that this may materialize as “more export controls, more training in Taiwan, deploying more ground-based missile units to Japan, and expansion of the limits on U.S. investment in China.”
In Wired, Zeyi Yang describes a possible tightening of restrictions on U.S. investment in Chinese AI startups. The restrictions were finalized by the Treasury Department last month and come into effect in January; they may subsequently be expanded under the new administration.
In the Wall Street Journal, economist Jason Furman proposes a regulatory framework for AI that balances safety and innovation. He outlines six principles for regulating AI:
1. Balance benefits and risks (duh).
2. When considering whether to adopt AI, consider the outcomes it produces relative to humans (think, for example, of driverless cars).
3. Think about how existing regulations hinder progress (I already do, believe me).
4. Use existing, domain-specific regulators, and resist the temptation to create a new superregulator.
5. Don’t allow regulation to become a moat that protects incumbents.
6. Not every problem caused by AI can be solved by regulating AI.
Vox’s Future Perfect published its 2024 Future Perfect 50, a list of 50 notable individuals working on the world’s most pressing problems. The list includes California State Senator Scott Wiener, architect of SB 1047, the frontier AI regulation bill vetoed by Governor Gavin Newsom last month. It also includes Dan Hendrycks, a machine learning researcher, founder of the Center for AI Safety (CAIS), and advisor to Elon Musk’s xAI. CAIS is thought to have played a significant role in shaping SB 1047, and its lobbying arm co-sponsored the bill.
Bonus Reads: