Delivered to your inbox every Monday morning, Last Week in AI Policy is a rundown of the previous week’s happenings in AI policy and governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
Quote: “To accelerate delivery of war winning capabilities, the Secretary of the Army is directed to… Enable AI-driven command and control at Theater, Corps, and Division headquarters by 2027.”
Militaries, it seems, are already feeling pressure to adopt AI in order to maintain the balance of power.
Trump officials are looking to overhaul Biden‑era AI‑chip export controls.
These changes would scrap the three‑tier country structure and replace it with bilateral licensing deals, turning access to U.S. chips into a bargaining chip in broader trade negotiations.
Tier 1 (17 countries + Taiwan) enjoys unlimited supply; Tier 2 (~120 countries) faces quantitative caps; Tier 3 adversaries—including China, Russia, Iran, and North Korea—are fully blocked.
The January rule, which takes effect May 15, seeks to confine top‑end compute power and certain model weights to the United States and close allies; companies are racing to align with its terms.
Low‑volume exemption under review: officials may slash the “no‑license” threshold from 1,700 to 500 Nvidia‑equivalent H100 chips, forcing more orders into the licensing queue.
Commerce Secretary Howard Lutnick wants export controls embedded in trade deals, aligning with President Trump’s deal‑centric approach.
Former Commerce Secretary Wilbur Ross calls the rewrite “a work in progress” and confirms that eliminating the tiers is on the table.
Oracle, Nvidia, and Senate Republicans oppose the current rule, warning that strict limits will push Tier 2 buyers toward cheaper, unregulated Chinese alternatives.
Seven GOP senators have asked Commerce to withdraw the framework.
Bespoke government‑to‑government agreements may replace the clear global categories, creating fresh uncertainty for exporters.
The roughly 250 specialists recruited across federal agencies in late 2023–24 have dwindled since the administration dismissed probationary and term staff; only about 25 remain.
Elon Musk’s Department of Government Efficiency drove the cuts, firing hundreds of recent technology hires and subsuming the U.S. Digital Service. The General Services Administration (GSA) later shuttered its 18F tech shop.
It’s unclear how the changes square with the White House’s ambitions for AI, particularly its goal of expanding AI talent in government:
Since January the White House has issued three executive orders on artificial intelligence and instructed every agency to expand AI hiring, yet the workforce underpinning those goals has been thinned.
Office of Management and Budget memo M‑25‑21 now urges agencies to “accelerate” AI adoption and prioritize recruiting seasoned practitioners, highlighting the gap the layoffs created.
Former officials warn of higher costs and slower modernization: agencies may need to buy expertise from contractors after losing internal staff who had cut Social‑Security wait times, simplified tax filing, and improved veterans’ care.
Recruiters fear reputational damage: Angelica Quirarte—who led the previous surge and resigned 23 days into the new term—says chaotic dismissals will deter technologists from public service.
The Center for AI Policy called for rebuilding capacity, arguing specialized government expertise is indispensable for both day‑to‑day service delivery and preparation for advanced AI systems that could reshape national security.
The Take It Down Act, which requires platforms to remove AI‑generated intimate images posted without consent, passed the House 409‑2 and now awaits President Trump’s signature.
It would be the first federal law targeting harms from AI‑generated content, after years of stalled efforts to police online speech and regulate artificial‑intelligence risks.
Meta, X, TikTok, Snapchat, and even the trade group NetChoice withheld opposition.
Conservative backing proved decisive.
Senate Commerce Chair Ted Cruz shepherded the bill while Heritage Action and other right‑leaning groups endorsed it, giving Republicans a clear policy win.
Americans for Responsible Innovation called the vote “the first crack in the dam,” suggesting industry can no longer block every tech bill.
Washington pressed Brussels to pause its AI “code of practice.”
In a letter to the European Commission, the U.S. Mission to the EU asked member states to defer the voluntary guidelines tied to the AI Act.
U.S. officials argue the draft duties exceed the statute’s scope and could penalize American firms, mirroring President Trump’s Davos accusation that EU regulation is “a form of taxation.”
The AI Act allows fines of up to 7% of a company’s global revenue—and 3% specifically for high‑risk‑model developers—if requirements are breached.
Some commentary:
House Judiciary Chair Jim Jordan says Brussels’ digital agenda could infringe Americans’ free‑speech rights, adding political weight to the diplomatic protest.
Meta’s Joel Kaplan labels the draft “unworkable,” and Alphabet objects to clauses on copyright enforcement and independent model testing.
Commission spokesman Thomas Regnier confirms receipt of the U.S. letter; Brussels still plans to publish a final code next month pending institutional sign‑off.
The administration proposes sending U.S. experts to work with EU counterparts before any phased rollout proceeds.
Press Clips
Silicon Valley comes to Washington. ✍️
Anthropic has updated its Economic Index. ✍️
Alexandr Wang on securing American leadership in AI. 📽️
How Huawei’s chip gains highlight China’s growing edge in the U.S.–China tech race. 📽️
Lennart Heim on why Chinese AI will match America’s. ✍️
OpenAI rolls back the sycophantic GPT‑4o update. ✍️
Brian Merchant says the AI jobs crisis is already here. ✍️
Today’s AIs are hyper-persuasive. ✍️
Nathan Lambert thinks that AI 2027 isn’t going to come true. ✍️📽️
Anil Seth on the illusion of AI consciousness. ✍️
o3 has superhuman GeoGuessr abilities; Kelsey Piper and Scott Alexander discuss. ✍️
Jack Clark on eschatological AI policy. ✍️
CSIS: Now is not the time to disarm the National Science Foundation. ✍️📽️
A DOGE-affiliated college student is in charge of using AI to rewrite regulations at HUD. ✍️