Delivered to your inbox every Monday, Press Clips is a rundown of the previous week’s happenings in politics and technology in America. News, opinion, podcasts, and more, to bring you up to speed for the week ahead.
Policy
AI Moratorium dropped from reconciliation bill following bipartisan pressure
The Republican-led U.S. Senate voted on Tuesday to strike the proposed decade-long ban on state-level AI regulation.
As previously reported, the provision, included in Trump’s original ‘Big Beautiful Bill’, was intended to bar states from passing or enforcing any legislation regulating AI, in an effort to avoid a “patchwork” policy landscape that proponents argue would weaken the US’s prospects in the race for AI dominance.
The moratorium provoked widespread, bipartisan opposition on the grounds that it violated the Byrd rule, as well as concerns that it undermined existing legislation enacted by states.
Following this backlash and a review by the Senate Parliamentarian, a diluted version was presented to senators, which tied compliance to the newly proposed $500 million in BEAD funding.
Further edits were made after lead moratorium proponent Sen. Ted Cruz (R-TX) and skeptic Sen. Marsha Blackburn (R-TN) agreed on less restrictive measures and a reduction to a five-year moratorium.
Despite these revisions, senators still voted 99-1 in favor of an amendment put forward by Sen. Blackburn to remove the provision during Tuesday’s vote-a-rama.
Blackburn commented that she would not support any kind of prohibition on state-level AI regulation until there was equivalent protection for citizens at the federal level.
The Senate passed the amended reconciliation bill on a narrow 51-50 vote, and the House followed suit on Thursday afternoon, passing the bill unchanged on a 218-214 vote.
Moves toward federal pre-emption of AI governance are unlikely to go away, though, given outspoken backing from leading Republicans and key tech players such as Google and OpenAI.
In addition to this, recent polling found that most Americans support a national standard on AI regulation and believe that the patchwork state approach will put the US at a disadvantage to China.
Pew polling shows, however, that both the public and AI experts are more concerned that the government won’t go far enough in regulating AI—a more likely outcome if regulation is left to the federal government, which has to date emphasized innovation over regulation.
The Atlantic Council released a report warning of overlooked defense risks in civil AI regulation
A report from the Atlantic Council, released on June 30, warns that civil AI regulation could have significant unintended consequences for defense and national security.
Written by Deborah Cheverton, the paper argues that the defense community can no longer afford to assume that regulatory carve-outs will shield it from the downstream effects of regulation built for civilian use.
The report identifies three key areas of concern:
Market-shaping regulation that limits access to cutting-edge tools
Judicial interpretations that restrict defense use cases
Compliance burdens that add cost and risk to deployment
The report explains that while most frameworks formally exclude defense, the dual-use nature of AI systems means those boundaries are “porous at best.”
In other words, even when military applications are carved out of civil regulation, restrictions on shared technologies—like foundation models or training data—can still spill over and constrain defense use indirectly.
The report pays particular attention to general-purpose AI models. As regulation of these models tightens, defense agencies risk being constrained in their ability to adopt systems that were not explicitly designed for military contexts.
The report notes that “regulations placed on the general-purpose AI systems that underpin sector-specific applications could impact the capabilities available to defense and national security users, even if those use cases are themselves technically exempt.”
The report urges proactive engagement from the defense community. It recommends that national security leaders identify and support specific civil initiatives—such as safety testing tools and cross-border data standards—that could be adapted for defense use.
Without that involvement, the report warns, the defense community could find itself “locked out” of key innovations or saddled with compliance regimes designed without its input.
You can read the full report here.
60+ organizations signed pledge on White House AI education investment plans
Over 60 tech and educational organizations have backed the White House’s AI ‘Pledge to America’s Youth’.
Organizations signing the pledge include Amazon, Apple, Google, Meta, Microsoft, and education-centered organizations Cognizant and Cengage.
Under the pledge, funding and grants for AI technology will be made available to schools and educators to provide them with the necessary tools to prepare students for the future job market.
Our Pledge to America’s Youth: Investing in AI Education. (June 30, 2025) (Source)
The Administration views the pledge as part of a broader effort to secure America’s position as the global leader in AI.
Michael Kratsios, Director of the White House Office of Science and Technology Policy and Chair of the White House Task Force on AI Education, remarked, “fostering young people’s interest and expertise in artificial intelligence is crucial to maintaining American technological dominance.”
“The resources and tools that have been pledged through this initiative will help our teachers and learners leverage AI in classrooms and communities across America,” commented Secretary of Education Linda McMahon, highlighting the initiative’s national reach.
Press Clips
Anton Leicht provides a skeptical account of California’s private AI governance bill, SB-813 ✍
RAND’s Zachary Burdette and Hiwot Demelash explore how the race for AGI could increase the risk of preventative war ✍
Interconnect’s Nathan Lambert advocates for an American DeepSeek Project ✍
Striking a similar chord, Epoch AI’s Arden Berg and Anson Ho assess what an AI Manhattan Project would look like ✍
ChinaTalk’s Jordan Schneider sits down with Peter Harrell, Matt Klein, and Kevin Xu for a second installment of ‘Is America Cooked?’ 🔉
Toby Ord on graphs AI companies would prefer you didn’t (fully) understand 📽