Understanding AI's energy footprint
Last Week in AI Policy #20 - Memorial Day, May 26, 2025
Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
MIT Technology Review Released Deep Dive Into AI’s Energy Impact
The multi-part investigative feature provides one of the most up-to-date and in-depth studies on AI’s energy and emissions burden.
You can read the full findings here; the most important takeaways are summarized below:
AI’s integration is driving the largest shift in the internet’s energy consumption in over a decade, reshaping power grids.
Accurately forecasting AI’s effect on the power grid is challenging because there is no standard way to measure the technology’s energy consumption, and companies are generally reluctant to share their own usage data.
Training vs. inference: Historically, training models required the most energy, but as usage has grown, inference now accounts for 80-90 percent of AI’s energy use.
Larger reasoning models consume far more energy per query than models performing simple tasks.
The carbon intensity of electricity used by data centers varies greatly depending on location and time of day; a back-of-the-envelope sketch of this arithmetic follows this list.
Due to the fragmented nature of the grid, there are disparities in emissions from state to state—notably though, the study points out that currently, "data centers tend to be concentrated in regions where electricity is relatively cheap...and where the grid is relatively dirty.”
Many data centers rely on natural gas, and new gas plants are being built rapidly to meet this demand.
While some tech companies have shown interest in nuclear, current output is insufficient, and building nuclear capacity comparable to gas could take decades. (President Trump signed an executive order on Friday removing regulatory barriers to fast-track the construction of new nuclear plants.)
Discounted electricity rates for Big Tech may significantly raise retail prices for consumers, compounding existing concerns about electricity affordability.
This could lead to a situation in which the public ends up effectively subsidizing AI’s massive energy consumption.
By 2028, AI workloads could consume over half of all electricity supplied to data centers—an amount comparable to the power used by 22 percent of U.S. households.
As the study points out, the move towards AI agents and models performing more complex tasks could dramatically increase demand beyond current estimates.
Reasons for optimism include software-efficiency gains, improved hardware, better data center design, and a trend toward smaller, specialized models.
Policy suggestions: mandatory energy-use transparency, promotion of clean power, tax incentives for renewable-powered centers, and streamlined permitting for renewable and nuclear projects.
Today, electricity consumed by data centers has a carbon intensity 48 percent higher than the U.S. average, making data centers a top priority if clean energy targets are to be met.
To bridge this gap, the authors suggest tax breaks and subsidies for data centers that run on renewables, mandates or targets, and streamlined permitting for renewable projects and nuclear plants.
AI should be treated as its own sector to ensure appropriate planning and infrastructure.
The study notes that the U.S. Energy Information Administration already does this for manufacturing, mining, agriculture, and construction, but detailed data on AI is “nonexistent”.
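To make the carbon-intensity point concrete, here is a minimal Python sketch of the arithmetic the study relies on: a query’s emissions are its energy use multiplied by the local grid’s carbon intensity. The per-query energy figure and the grid intensities below are illustrative assumptions, not numbers from the report.

```python
# Back-of-the-envelope: the emissions of a single AI query are the product of
# the energy the query consumes and the carbon intensity of the local grid.
# All figures below are illustrative assumptions, not measurements from the study.

WH_PER_QUERY = 3.0  # assumed energy per chat query, in watt-hours

# Assumed grid carbon intensities in grams of CO2 per kilowatt-hour,
# chosen only to show how location changes the footprint of the same query.
GRID_INTENSITY_G_PER_KWH = {
    "coal-heavy region": 800,
    "hypothetical national average": 370,
    "hydro-heavy region": 30,
}

def grams_co2_per_query(wh: float, g_per_kwh: float) -> float:
    """Convert watt-hours to kilowatt-hours, then apply grid carbon intensity."""
    return (wh / 1000.0) * g_per_kwh

for region, intensity in GRID_INTENSITY_G_PER_KWH.items():
    grams = grams_co2_per_query(WH_PER_QUERY, intensity)
    print(f"{region}: {grams:.2f} g CO2 per query")
```

The same query emits more than 25 times as much carbon on the assumed coal-heavy grid as on the hydro-heavy one, which is why the study treats siting and grid mix as central policy levers.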
Anthropic’s Claude 4 Release Raised Alarm Bells Over Safety Commitments
The model was released on Thursday, boasting new API capabilities, web search, integrations with apps such as Google Workspace, and record-setting performance on coding benchmarks.
But the nature of the launch suggests Anthropic has joined a broader trend of frontier developers taking their own safety commitments less seriously.
Anthropic has always championed AI safety. In September 2023, they became the first AI company to release a Responsible Scaling Policy (RSP), which soon became an industry standard.
The aim of RSPs is to ensure that any future expansion of model capacity and its commercial release is contingent on meeting explicit, measurable safety benchmarks and independent oversight, so that capability growth never outpaces risk mitigation.
As first reported by TIME, Claude 4 is the first model to trigger Anthropic’s AI Safety Level 3 (ASL-3) standards.
This came after evaluators found it had the potential to aid novices in building bioweapons, with the company’s chief scientist, Jared Kaplan, stating that internal modeling shows Claude 4 could be used to “synthesize something like COVID or a more dangerous version of the flu”.
Garrison Lovely of Obsolete reports that the release breaches Anthropic’s earlier pledge to withhold any model needing ASL-3 safeguards until ASL-4 criteria were defined—a step Anthropic has yet to take.
When pressed on this, an Anthropic spokesperson told Obsolete that the original 2023 RSP is “outdated”, adding that the company no longer relies on pre-defined future standards, per a 2024 update to the policy.
Anthropic’s system card also revealed that Apollo Research, after evaluating an earlier version of Claude 4, advised against its release due to high levels of deception.
Examples include writing self-propagating viruses, fabricating legal documentation, and leaving hidden notes to itself.
Reconciliation Bill Imposing Moratorium on State-Level AI Legislation Passed the House
As reported last week, an amendment to the bill added by the U.S. House Energy and Commerce Committee aims to block states from regulating AI “models, systems, or automated decision systems” for a decade.
The legislation drew strong pushback: a bipartisan group of 40 state attorneys general wrote a letter urging Congress to reject the amendment, arguing that states have already imposed considered measures to protect residents and are best placed to continue doing so.
The House ignored this request, passing the bill last week by a narrow 215–214 margin.
The bill now moves on to the Senate for consideration, where the amendment may face challenges due to both the opposition from states and the Byrd Rule, which permits senators to strike from a reconciliation bill any provision whose primary purpose is not budgetary.
TAKE IT DOWN Act Signed Into Law
The bipartisan bill was signed into law last Monday and aims to protect victims of digital exploitation, such as revenge porn and deepfake image abuse.
Introduced by Sen. Ted Cruz (R-TX), the bill establishes strict penalties for the nonconsensual sharing of explicit imagery or deepfakes and requires platforms to remove violating content.
The legislation follows action taken by the vast majority of states to combat deepfakes and revenge porn.
Senator Cruz commented that this represents a “historic win” for victims and celebrated the bravery of leading advocates and victims Elliston Berry, Francesca Mani, Breeze Liu, and Brandon Guffey.
New Study Highlights States’ Increased Vulnerability to GenAI Fraud
A report published Tuesday by the SAS Institute highlights increasing rates of AI-related fraud, urging government agencies to modernize their defenses.
Examining 1,100 organizations, the study finds a sharp rise in fraudulent schemes exploiting generative AI to create realistic fake documents and identities.
With 85 percent of government decision-makers ranking fraud prevention as a top priority, agencies are recognizing the need to “fight fire with fire” by deploying AI-driven countermeasures.
Despite this, only 44 percent of state and local governments have implemented generative AI, citing limited staffing and resources as key obstacles.
The report emphasizes that adopting AI alone is insufficient—institutions must also invest in governance, training, and structured fraud prevention strategies.
Press Clips
Peter N. Salib and Simon Goldstein argue that reasoning models may reintroduce optimization risks absent in earlier LLMs. ✍
Dwarkesh Patel sits down with Anthropic’s Sholto Douglas and Trenton Bricken to discuss the new Claude Opus 4 model. 📽
Vikram Sreekanti and Joseph E. Gonzalez make the case that frontier AI models should be treated as infrastructure, not consumer products. ✍
AI Policy Perspectives’ 5 interesting AI Safety, Responsibility & Social Impact papers. ✍
Transformer’s Lynette Bye writes that misaligned AI is no longer just a theory. ✍
Scott J Mulligan assesses the efficacy of location tracking chips under the Chip Security Act. ✍
Jordan Schneider on last week’s UAE and KSA chip deals. ✍
Special Competitive Studies Project’s latest episode of Strait Forward discussing US-Taiwan relations with Director of US Taiwan Watch, Chieh-Ting Yeh. 📽
Jack Wiseman presents a solution to the AI-copyright debate. ✍