Watchdogs Release ‘OpenAI Files’ Amid $200M Defense Deal
Last Week in AI Policy #22 - June 23, 2025
Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
Two non-profit tech watchdogs, the Midas Project and the Tech Oversight Project, released ‘The OpenAI Files’ on Wednesday.
The files are a comprehensive collection of concerns relating to “governance practices, leadership integrity, and organizational culture at OpenAI.”
The report breaks down its findings into four main areas of concern.
Restructuring: The report claims that OpenAI plans to remove limits on investor returns, drastically reduce the nonprofit board’s influence, and abandon the structure it was founded on.
CEO Integrity: The report questions Sam Altman’s character, citing previous instances in which senior employees attempted to remove him, his alleged track record of deceit, and the fact that he signed documents that allowed OpenAI to revoke employees’ vested equity if they did not sign restrictive NDAs.
Transparency & Safety: Rushed safety evaluations and a culture of recklessness and secrecy are among the claims leveled at OpenAI by the report.
Conflicts of Interest: This section details undisclosed board conflicts and Altman’s financial ties, noting that no recusals were announced ahead of a major restructuring that could unlock billions in new investment.
The full report provides a detailed analysis of each of these areas of concern, and you can read it here.
This report was published in the same week that OpenAI was awarded a $200 million defense contract, which the company says will “prototype how frontier AI can transform its administrative operations”.
OpenAI announced the partnership as part of its newest initiative, ‘OpenAI for Government’.
Sen. Cynthia Lummis (R-WY) introduced the RISE Act last week.
The bill seeks to provide civil liability immunity to AI developers on the condition that they release and maintain a model card, thereby incentivizing greater transparency.
It also instructs developers to provide ‘clear and conspicuous documentation… describing the known limitations, failure modes, and appropriate domains of use’ for their products.
Under the legislation, developers would have 30 days to update and release the model card following deployment of a new version, or upon discovering new failure modes of existing models.
The AI Futures Project, the team behind AI 2027, was consulted on drafts of the bill and has tentatively endorsed it, though Daniel Kokotajlo stated that the bill ‘does not go far enough’ with regard to transparency.
You can read his analysis here.
On the same day this legislation was introduced, the RAISE Act passed the New York legislature.
The RAISE Act also focuses on transparency, but it's more stick than carrot.
Instead of rewarding developers for being more transparent, the bill would allow the New York Attorney General to bring civil penalties against developers who do not comply with required safety protocols.
Critics of the bill say it will be a major setback for New York start-ups, though its authors claim it only affects companies that have “spent over $100 million in computational resources to train advanced AI models”.
It has also been compared to California’s SB 1047, which Governor Newsom vetoed last year, citing its potential “chilling effect” on tech innovation.
Anthropic’s Jack Clark also commented on the bill, saying it is “overly broad/unclear in some of its key definitions” and echoing the idea that its penalties would be too harsh on smaller firms.
AI Moratorium Referred to Senate Parliamentarian for Byrd Rule Review
The battle over states’ rights to regulate AI continued this week as Senate Commerce ranking member Maria Cantwell (D-WA) submitted materials to the Senate parliamentarian on Wednesday, pressing the case that the moratorium provision is not fit for inclusion in the reconciliation bill.
Many have cast doubt on whether the provision will make it through the Senate, citing potential violations of the Byrd Rule.
Over time, it has drawn increasing skepticism, revealing a divide within the GOP between factions that champion states’ rights and others more concerned about overly restrictive state-level AI regulation.
Some GOP representatives who voted the bill through the House, including Rep. Marjorie Taylor Greene (R-GA), have even said they would vote no if it came back to the House with the moratorium included.
Sen. Ted Cruz (R-TX) has since attempted to make the provision Byrd-compliant by tying it to BEAD funding.
Now, the Senate parliamentarian will decide the fate of state-level AI governance for at least the next decade.
And while the role is technically advisory, the parliamentarian’s rulings are generally treated as final and have only been effectively overruled once, when parliamentarian Robert Dove was dismissed in 2001.
California Policy Group Publishes Full Report on AI Governance
The Joint California Policy Working Group on AI Frontier Models has released its full report outlining the policy principles that should guide AI governance in the coming years.
The report, co-led by prominent AI experts including Fei-Fei Li, was commissioned by Governor Newsom in 2024 to develop an effective AI policy strategy that could shape national standards.
Among the core policy principles are:
A “trust but verify” ethos: transparency, oversight, and adaptability.
Evidence-based policymaking using diverse analytical methods beyond observed harms.
Recognition that early design choices in technology can set long-term trajectories, warranting caution.
The report doesn’t advance specific policies, but it does make broad recommendations about policy direction:
Transparency mandates for AI developers, with whistleblower protections and public disclosures at the forefront.
Third-party evaluations and adverse event reporting to track real-world impacts.
Flexible, goal-driven policy thresholds (e.g., by compute or user reach), rather than crude and unnecessarily punitive ones.
Central to the report is an attempt to articulate how policy can align incentives to both safeguard the public and get the most out of AI.
You can read the full report here.
Press Clips
Anton Leicht on 'AI middle powers' ✍
Eva Dou featured on China Talk to discuss her recent book about the evolution of Huawei 🔉
Margaret Woolley Bussey, Executive Director of the Utah Department of Commerce, joined the NatSec podcast to discuss Utah's establishment of an Office of AI Policy 🔉
Jeremy Chang, CEO and Director of Economic Security Research at DSET (Research Institute for Democracy, Society, and Emerging Technology), sat down with the Strait Forward podcast to discuss economic security, Taiwan's economic relationship with China, and more 🔉
Epoch AI’s Anson Ho and Arden Berg on the efficacy of biorisk evaluations ✍
Miles Brundage proposes three essential principles for AI governance ✍



