Last Week, This Week #3 - December 10, 2024
Several new pieces of legislation regulating the end uses of AI have been introduced in recent days.
Delivered to your inbox every Monday* morning, LWTW is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
*Sometimes we’re a day late. Oops.
President-elect Trump announced David Sacks as his AI and crypto czar. Sacks is an entrepreneur, investor, former PayPal COO (and member of the so-called “PayPal Mafia”), and co-host of the All-In Podcast. He has been a vocal supporter of Trump and a consistent critic of the Biden administration, particularly on its policy toward Ukraine. It remains to be seen how meaningful the appointment is. For one, it appears to be part-time. Second, it’s possible that the appointment is mainly chum in the water for the crypto community, with an AI role tacked on. That said, Sacks has been outspoken on the AI industry. He described OpenAI’s manoeuvre out of its nonprofit structure as its becoming a “piranha for-profit”. Further, in a now-deleted tweet, Sacks said the following:
"I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.
AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.
By the way, I doubt OpenAI would be subject to so many attacks from the safety movement if it wasn’t constantly declaring its outright intention to create AGI.
To the extent the mission produces extra motivation for the team to ship good products, it’s a positive. To the extent it might actually succeed, it’s a reason for concern. Since it’s hard to assess the likelihood or risk of AGI, most investors just think about the former."
Mark Zuckerberg is positioning himself favorably with the incoming Trump administration. Through his president of global affairs, former British Liberal Democrat leader Nick Clegg, the Meta chief has struck a conciliatory tone, conceding that the company “overdid it a bit” when moderating content during the COVID-19 outbreak. In another move seemingly designed to pacify the President-elect, and to avoid being iced out of his administration, the Facebook founder went further and explicitly linked the removal of content to pressure from the Biden administration. Zuckerberg is not the only Big Tech CEO concerned about the way the wind is blowing. Jeff Bezos recently intervened to prevent the Washington Post from making an official endorsement in the presidential election, amid a series of Kamala Harris endorsements by national media. The question remains how much of Big Tech’s role and influence will survive an administration seemingly so hostile to it.
Senate votes to approve the TAKE IT DOWN Act. The bill is the latest in a series of legislative efforts (including the TRAIN Act, introduced by Senator Peter Welch (D-VT)) to regulate the use and development of generative AI models. In a recent press release, the Senate Committee on Commerce, Science, and Transportation describes how the bill would “criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or “deepfake pornography”), and require social media and similar websites to have in place procedures to remove such content upon notification from a victim.” The legislation was introduced by Senator Ted Cruz (R-TX), who has been quick to weigh in on AI and technology policy. In a recent letter to Attorney General Merrick Garland, the Texas senator questioned whether foreign organizations, including the Centre for AI Governance, were acting illegally in their efforts to influence American AI policy.
Americans for Responsible Innovation (ARI), a policy advocacy and lobbying organization, has published a letter to the House of Representatives leadership calling for the reconstitution of the House Select Committee on the CCP. In recent weeks and months the Select Committee has held meetings and hearings with Anduril executives, investigated American universities over their close ties to Chinese organizations within the orbit of the CCP, and collaborated with the Center for Strategic and International Studies to run interactive simulations of the American defense industrial base in the event of a conflict centred on Taiwan. Unlike permanent committees, select committees must be reconstituted by majority vote with every new Congress. Whether the Select Committee would survive the transition to a Democratic-majority House was an open question. In light of the Republicans’ trifecta sweep, its prospects look markedly sunnier.
Senators Elizabeth Warren (D-MA) and Eric Schmitt (R-MO) introduced S. 5463, the Protecting AI and Cloud Competition in Defense Act. The bill would mandate more competitive bidding for cloud and AI contracts, targeting major tech companies like Google, Microsoft, Amazon, and Oracle, and seeking to ensure the Department of Defense maintains exclusive data rights, implements robust security measures, and prevents market concentration. With provisions for strict vendor compliance, potential penalties, and annual market competition reports, the bill represents a significant effort to reshape government technology procurement.
The House Financial Services Committee looks at regulating the use of AI in the financial services industry and housing sector. Representatives Maxine Waters (D-CA) and Patrick McHenry (R-NC) have introduced a resolution recognizing the increasing use of artificial intelligence in both sectors, as well as a bill that would require the Securities and Exchange Commission (SEC) to study the “realized and potential benefits of artificial intelligence” in the sector and publish the results within six months. Congressional resolutions are non-binding and do not have the force of law. The bill, which does have the force of law, simply requires the SEC to produce a fairly open-ended report within a generous amount of time. So, a double-nothingburger.
Bonus Reads:
Did the Biden administration succumb to the hype of AI? Bill Drexel argues for a more tempered approach from the new administration.
NVIDIA’s Josh Parker joins the Special Competitive Studies Project podcast for an episode on AI, data centers, and energy efficiency.
OpenAI’s new model tried to avoid being shut down by making a copy of itself, according to a report published by the company. However, the model was explicitly instructed to pursue its goal at all costs, and only attempted to avoid shutdown in ~5% of cases, so this may not be such a stark case of misalignment.
In a recent episode of Google DeepMind’s podcast, the company’s Director of Public Policy Nicklas Lundblad discusses the balancing of regulation and innovation in AI.
Noah Smith argues that manufacturing is now a key battleground in geopolitical power struggles. In this context, China is “unleashing a massive and unprecedented amount of industrial policy spending — in the form of cheap bank loans, tax credits, and direct subsidies — to raise production in militarily useful manufacturing industries like autos, batteries, electronics, chemicals, ships, aircraft, drones, and foundational semiconductors.” Spot the AI-relevant technologies.
Unidentified drones have been reported in at least 10 counties in New Jersey since mid-November.
Are tales of AI doom just the fancies of market incumbents looking to regulation as a means of cementing their monopoly? Garrison Lovely considers this idea.
Ethics won’t save us from AI, says R.J. Snell. What will?



