Delivered to your inbox every Monday, Press Clips is a rundown of the previous week’s happenings in politics and technology in America. News, opinion, podcasts, and more, to bring you up to speed for the week ahead.
Multiple releases from leading AI developers saw the companies abandon previous regulatory and safety commitments. Google released Gemini 2.5 Pro, a million-token context window reasoning model which appears to beat out other leading models across a number of benchmarks, and is—according to some—the new gold standard for models.
However, as reported in Transformer, Google has failed to make good on its public commitments to safety and transparency in its new releases, choosing not to release a system card, despite their commitment to “publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use” for all major model releases.
OpenAI also made headlines with the release of their new 4o image model. Based on a change in model architecture from its previous DALL-E, this new model is considerably better at generating text, as well as a range of image styles and other functionality. This was demonstrated most notably in the Ghibli frenzy, with users recreating their own images in the style of the Japanese animation company. On Thursday, the White House tweeted a Ghibli version of a real image of a convicted drug dealer being detained by ICE.
In releasing the model, OpenAI is taking an explicitly hands-off approach to content moderation. “AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create,” said Joanne Jang, OpenAI’s model behavior lead.
Federal Government
National Institute of Standards and Technology (NIST) publishes report on adversarial machine learning.
NIST has released its final report on adversarial machine learning, offering a comprehensive taxonomy and shared terminology for classifying attacks on predictive and generative AI systems.
The report categorizes threats by system type, model lifecycle stage, and attacker capabilities, detailing vulnerabilities such as data poisoning, prompt injection, evasion, and model extraction.
Proposed mitigations include adversarial training, data sanitization, input validation, output monitoring, and red teaming, though NIST emphasizes the limitations of current defenses.
House Energy Subcommittee holds hearing on energy, AI, and national security
At a House Energy Subcommittee hearing on March 25, Chairman Bob Latta (R-OH) warned that the U.S. is facing a worsening electric grid reliability crisis, driven by accelerating retirements of baseload power and surging electricity demand.
He cited a NERC forecast projecting 52 gigawatts of generation capacity—equivalent to 40 nuclear plants—will retire within four years, while new demand, particularly from AI development and domestic manufacturing, strains the system.
Latta criticized federal policies, including the Clean Power Plan 2.0 and subsidies for intermittent energy sources, for undermining the economics of dispatchable baseload generation needed to stabilize the grid.
He also flagged permitting bottlenecks and restrictive state policies as compounding challenges for grid operators tasked with maintaining reliability and resource adequacy.
Select Committee on the Chinese Communist Party celebrates the passage of the DETERRENT Act
The House of Representatives passed H.R. 1048, the DETERRENT Act, a bipartisan bill aimed at countering foreign influence, particularly from China, in U.S. academic institutions.
The legislation lowers the reporting threshold for foreign gifts to U.S. universities from $250,000 to $50,000 — and to $0 for entities from “countries of concern” like China.
It mandates disclosure of foreign contracts with individual faculty at research-focused universities and imposes penalties, including fines and potential loss of federal funding, for noncompliance.
The bill responds to findings of nearly $40 million in unreported research contracts between major U.S. universities and Chinese entities tied to the Chinese Communist Party and its military.
State Government
California State Senator Jerry McNerney (D-Pleasanton) introduces SB813 to establish safety standards for AI.
This bill would create an independent, third-party panels of AI experts and academics to devise safety standards, representing a “third way” in AI governance.
Sen. McNerney also introduced SB 833, which would ensure a human-in-the-loop for AI used in key services such as critical infrastructure, energy, emergency services, and more.
“SB 833… will create commonsense safeguards by putting a human in the loop — human oversight of AI,” said McNerney. “Artificial Intelligence must remain a tool controlled by humans, not the other way around.”
Press Clips
Why the world is looking to ditch US AI models (MIT Technology Review)
A Letter to Michael Kratsios, Director of the White House Office of Science and Technology Policy (White House)
If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born (WIRED)
Economics of AI Fellowship (Stripe)
Where We Are Headed (Hyperdimensional)
AGI and the Future of Warfare (ChinaTalk)
Import AI 405: What if the timelines are correct? (Import AI)
Is AGI a hoax of Silicon Valley? (AI Supremacy)
Third-wave AI safety needs sociopolitical thinking (Richard Ngo)
U.S.-China Artificial Intelligence Competition: A Conversation with Dr. Jeffrey Ding (ChinaPower)
AI Governance in Action: How Model Cards Enhance Compliance and Risk Management (International Business Times)
Do China and the US need to work together on AI safety? Boao Forum debates (South China Morning Post)
OpenAI peels back ChatGPT’s safeguards around image creation (TechCrunch)
The case that AGI is coming soon (80,000 Hours)
The Strategic Implications of Open-Weight AI (Special Competitive Studies Project)
What jobs today are really most AI vulnerable? (James Pethokoukis)
Gemini 2.5 is the New SoTA (Zvi Mowshowitz)





