Courts Side with Anthropic and Meta in Copyright Rulings
Last Week in AI Policy #23 - June 30, 2025
Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
Two rulings came this week as Anthropic and Meta faced separate lawsuits alleging that training generative AI on copyrighted works does not qualify as fair use.
While the two rulings are widely seen as a win for generative AI, in both cases the judges held reservations that left the door open for future scrutiny and potential penalties.
In Bartz et al. v. Anthropic, three authors sued Anthropic for copyright infringement. The authors alleged that Anthropic used pirated and purchased-and-scanned copies of their books to train Claude.
US District Court Judge William Alsup oversaw the case, and his decision drew a firm line between transformative model training and unjustified data acquisition.
The court ruled that using copyrighted works to train Claude was “exceedingly transformative”, and as such constitutes fair use.
The court also ruled that scanning purchased books counts as fair use, classifying Anthropic’s digitization as private research and finding that the use of the authors’ books was not exploitative and caused no meaningful market harm.
However, the court ruled that Anthropic’s use of pirated books to build its permanent internal library is not protected under fair use.
Co-founder Ben Mann allegedly downloaded Books3, an online library of 196,640 books “that he knew had been assembled from unauthorized copies of copyrighted books.”
Anthropic had the option to purchase the books legally, but chose not to, aiming to avoid what another co-founder, CEO Dario Amodei, described as a “legal/practice/business slog,” according to the Fair Use Order.
While Anthropic could ultimately face fines of up to $150,000 per infringed work, the case adds to a growing legal precedent that permits generative AI developers to train on legally sourced copyrighted works, despite considerable opposition.
Kadrey et al. v. Meta was presided over by US District Court Judge Vince Chhabria, who issued a summary judgment in favor of Meta, bypassing the need for a jury trial.
Much like Judge Alsup, Judge Chhabria justified his ruling by classifying Meta’s practices as transformative. However, he made it clear that his fair use ruling was a response to the specific case presented to him by the plaintiffs, not a judgment on whether generative AI training constitutes copyright infringement more broadly.
“Because the issue of market dilution is so important in this context, had the plaintiffs presented any evidence that a jury could use to find in their favor on the issue, factor four would have needed to go to a jury. Or perhaps the plaintiffs could even have made a strong enough showing to win on the fair use issue at summary judgment.” - US District Court Judge Vince Chhabria
Chhabria gave less weight to the piracy allegations, scheduling a separate Zoom hearing to address them.
Conversely, market harm, or market dilution, was a greater focus for Chhabria, who claimed that “companies are creating something that often will dramatically undermine the market” by training generative AI with copyrighted works.
Senate Parliamentarian ruled the AI state moratorium Byrd-compliant
Last week we reported that Senator Maria Cantwell (D-WA) had submitted materials to the Senate Parliamentarian to review the proposed AI moratorium clause included in the Big Beautiful Bill for violation of the Byrd rule.
A proponent of the moratorium, Sen. Ted Cruz (R-TX), had amended the bill to tie the provision to BEAD funding in an attempt to make it Byrd-compliant.
On Sunday, Senate Parliamentarian Elizabeth MacDonough ruled that this amendment satisfies the Byrd rule.
MacDonough has since communicated a caveat to Sens. Cantwell and Cruz in a meeting on Wednesday.
According to Senator Cantwell, the Parliamentarian has asked Senator Cruz to revise the AI provision so that noncompliant states would forfeit only the newly released BEAD funding from the reconciliation bill, rather than the full $42 billion allocation.
Cruz has made assurances that this is the case, and Senate Commerce Republicans have also indicated that the measures do not affect existing or future tech-neutral legislation, including intellectual property rights.
The moratorium still faces an uphill battle to get through the Senate and House.
It is also worth noting that the bill only got through the House by one vote. Since then, controversy has mounted and Rep. Marjorie Taylor Greene (R-GA) has confirmed that she would vote “no” if it came back to the House with the AI provision remaining.
Reps. Moolenaar (R-MI) and Krishnamoorthi (D-IL) introduced the No Adversarial AI Act
The bipartisan legislation “would prohibit U.S. executive agencies from acquiring or using artificial intelligence developed by companies tied to foreign adversaries like the Chinese Communist Party,” according to the Select Committee on the CCP.
In short, it would bar federal adoption of AI systems such as DeepSeek.
The bill would also:
Create a list of adversary-developed AI systems.
Establish a delisting process through which companies can demonstrate that they are no longer under the control or influence of foreign adversaries.
The legislation represents an extension of existing efforts throughout federal and state government, as well as internationally, to curtail the use and influence of adversarial AI systems.
Meanwhile, Reuters reported this week that DeepSeek’s ties to the Chinese Communist Party are even more extensive than previously understood, and that China is circumventing export controls through Southeast Asian shell companies, according to a senior U.S. State Department official, speaking anonymously.
As competition between the US and China continues to climb, Committee Chairman John Moolenaar (R-MI) commented, “we are in a new Cold War—and AI is the strategic technology at the center.”
Texas enacted bill placing limits on AI use and development
Texas became the latest state to enact restrictions on artificial intelligence with the passage of the Texas Responsible AI Governance Act (TRAIGA).
Signed into law by Governor Greg Abbott on June 22, the Act imposes outright bans on a narrow set of AI uses—including systems built to manipulate human behavior, discriminate, infringe constitutional rights, or generate deepfakes.
While TRAIGA avoids sweeping obligations proposed in earlier drafts, it introduces a regulatory sandbox, an AI advisory council, and targeted enforcement powers for the Attorney General.
Companies have a 60-day window to address violations, and developers are shielded from liability for misuse by end users.
The Act applies broadly to any system used by Texas residents, and its stated aim is to enable “responsible development of AI.”
The law must be “broadly construed and applied,” according to its text—a phrase that could hint at wider future interpretation.
An earlier, more expansive version of the bill included mandatory risk assessments for AI developers, third-party evaluations, and a private right of action, which would have allowed individuals to sue companies over AI harms.
The law is scheduled to take effect on January 1, 2026—depending on the fate of the moratorium, of course.
Press Clips
Andrew Stokols on ‘China’s Diverging AI Path’ (ChinaTalk) ✍
SemiAnalysis takes a look at AI’s impact on the grid ✍
Obsolete’s Garrison Lovely writes that Tech is taking a gamble on the moratorium ✍
Washington wakes up to AGI, in Transformer ✍
Brian Merchant shares stories of AI’s impact on tech jobs ✍
In the Free Press, Tyler Cowen asks if AI makes us stupid ✍