Three courtrooms, two bad bets; Fortune's special issue on AI in the industry; agents are getting better at coding; Swedish PM calls for a pause on the EU AI Act
Sam Altman on Hard Fork Live; Gartner predicts over 40% of agentic AI projects will be abandoned by 2027; Chinese VC HongShan develops an interesting AI benchmark; the global divide in AI infrastructure
If you listened to the loudest voices in publishing, music and stock photography over the past two years, you heard two confident predictions.
First, that generative AI systems are nothing more than “plagiarism machines” or “synthetic media extruders,” churning out derivative dross that courts would swiftly recognize as non-transformative theft.
Second, that suing in the toughest copyright jurisdictions such as the UK would all but guarantee victory and billions in damages.
I heard these arguments firsthand in November 2024, at an MMC Ventures event focused on the ethical challenges of training AI models, and I had just one warning for the audience members cheering on the wave of freshly filed AI copyright lawsuits: “Be careful what you wish for.”
That warning now looks prescient because, despite some cope from the usual suspects on social media, this week’s three rulings pretty much dismantle both bets.
In San Francisco, Judge Vince Chhabria tossed the headline claim brought by 13 prominent authors (including Sarah Silverman and Ta-Nehisi Coates). He ruled that the plaintiffs “made the wrong arguments,” and (crucially) declined to treat Meta’s Llama model as a substitute for the original books. Translation: the court saw transformation, not straight-line plagiarism.
Down the hall, Judge William Alsup held that training Claude on millions of books was “quintessentially transformative” and therefore fair use. Alsup still ordered a trial over Anthropic’s alleged reliance on pirated copies, but the creative industry’s core thesis (that training itself is illegal copying) was rejected.
Then came the third blow. On paper, Britain’s narrower fair-dealing doctrine looked like a fortress. Yet Getty Images dropped its central copyright claim against Stability AI once it became clear the training had occurred on Amazon servers outside the UK. What remains is a trademark spat over watermarks, hardly the precedent-setting rout we were promised.
Put the three together and several strategic miscalculations pop out. I went to a military boarding school, so I’m going to highlight just two problems using combat-based metaphors:
Choosing the wrong weapons: creatives misunderstood (intentionally or not) how transformative AI outputs are, and therefore focused obsessively on the inputs. Catchy pressure campaigns might work well in news headlines, but they fall apart quickly when confronted with reality. And right now, the reality is that US judges seem increasingly comfortable with the idea that ingesting a work to train a model is more like reading a book than photocopying it: the models end up mapping statistical patterns, not spitting back paragraph-long chunks on demand. Of course, with the right prompt fed to a fine-tuned model served by Perplexity and optimized specifically for summarizing news articles, the risk of regurgitation is much higher. But outside of these narrow, application-layer, outputs-focused cases (which in my opinion should be pursued through litigation, and are in line with Disney’s approach), the inputs-heavy “plagiarism machine” narrative appears to be DOA in the US. What is still in dispute are the methods through which the works used for model training were obtained: beyond scraping the internet for data (another subject where the courts have sent mixed signals), Anthropic is reported to have bought large volumes of second-hand books and scanned them into digital copies, while Meta reportedly downloaded works from peer-to-peer file-sharing sites.
Choosing the wrong battlefield: suing in London quietly implied that UK law’s stricter vibe would deliver an easy win. This strategy overlooked a mundane reality: almost no foundation model training happens in the UK, because the required GPU farms live in Northern Virginia, Oregon and Dublin. When the data centers aren’t local, UK law often isn’t either. And if the creative industry is threatened by what American tech companies are doing, wait until they hear what goes on in China, a country with a rich tradition of IP theft at scale and a complete disregard for other countries’ regulations (plus, I might add, a propensity to allow models to generate much more harmful outputs when used outside China).
The creative sector’s lawyer-centric playbook reflects understandable fear; nobody likes seeing decades of content vacuumed into a black-box model. But the litigation outcomes above suggest a dangerous reliance on precedent hunting, which I don’t believe will (dad joke alert!) generate the desired outputs.
By the time an appellate court rules on, say, Llama 3’s training data, Llama 6 will be writing screenplays and the evidence set will look prehistoric. And while a “fair use” green light invites Silicon Valley to hoover up every scrapbook and video in sight, a surprise injunction can just as rapidly yank the ladder away from smaller labs, putting the brakes on startups and hurting competition. Either way, the people who actually create art wind up reacting to rules written for yesterday’s tech. Finally, imagine one country declaring model training fair use, another calling it theft, and a third demanding opt-in only licensing. That is not a hypothetical future; it has been Politico’s RSS feed for the past year.
So what’s the alternative? Stop treating copyright as a moat or shoehorning it into all sorts of unrelated legislation, and start treating it as a fair marketplace. A few proposals:
Statutory blanket licensing for training data. We already do this for radio airplay and mechanical royalties. Set a per-token micro-levy, funnel the cash back to rights holders via collecting societies, and let courts police only the edge cases.
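To make the arithmetic concrete, here is a back-of-the-envelope sketch of how such a levy could be collected and split pro rata. Every rate, name and formula below is my own hypothetical, not drawn from any actual statute or collecting society:

```python
# A sketch of a statutory per-token levy: a flat rate on training tokens,
# with the pool split pro rata by tokens contributed. All numbers, names
# and formulas here are hypothetical.

LEVY_PER_TOKEN = 0.000001  # hypothetical rate: $1 per million training tokens

def levy_owed(tokens_trained_on: int) -> float:
    """Total levy a model builder owes for one training run."""
    return tokens_trained_on * LEVY_PER_TOKEN

def distribute(pool: float, tokens_by_rights_holder: dict[str, int]) -> dict[str, float]:
    """Pro-rata split of the levy pool, as a collecting society might run it."""
    total = sum(tokens_by_rights_holder.values())
    return {holder: pool * n / total for holder, n in tokens_by_rights_holder.items()}

pool = levy_owed(2_000_000_000_000)  # a 2-trillion-token run -> a $2,000,000 pool
print(distribute(pool, {"PublisherA": 300_000_000, "PublisherB": 100_000_000}))
# {'PublisherA': 1500000.0, 'PublisherB': 500000.0}
```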
Transparency plus audit rights. I feel like a broken record for saying this, but stop pushing unrealistic or harmful transparency demands. Instead, require model builders above a certain size to adopt strong data governance based on ISO/IEC 42001 compliance, which would allow independent auditors to spot-check for wholesale dumps of, say, last week’s Marvel screenplay.
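As an illustration of what such a spot-check might involve, here is a minimal sketch of a verbatim-overlap test an auditor could run against sampled model outputs. The 50-word threshold and function names are assumptions of mine, not anything prescribed by ISO/IEC 42001:

```python
# A minimal regurgitation spot-check: measure the longest verbatim word run
# shared between a sampled model output and a protected text, and flag
# anything above a threshold. The threshold below is illustrative.

def longest_shared_run(output: str, protected: str) -> int:
    """Length (in words) of the longest verbatim run appearing in both texts."""
    out_w, ref_w = output.split(), protected.split()
    best = 0
    for n in range(1, len(out_w) + 1):
        ref_ngrams = {tuple(ref_w[i:i + n]) for i in range(len(ref_w) - n + 1)}
        if any(tuple(out_w[i:i + n]) in ref_ngrams for i in range(len(out_w) - n + 1)):
            best = n  # a shared run of length n exists; try longer ones
        else:
            break  # no shared run of length n, so none of length n + 1 either
    return best

FLAG_THRESHOLD = 50  # illustrative: flag outputs sharing 50+ consecutive words

def flag_output(model_output: str, protected_text: str) -> bool:
    """True if the output reproduces enough of the protected text to escalate."""
    return longest_shared_run(model_output, protected_text) >= FLAG_THRESHOLD
```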
Flexible, creator-controlled opt-out mechanisms. C2PA isn’t perfect, but it beats burying creators in PDFs of obscure terms tucked away on cloud consoles.
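For flavor, here is a minimal sketch of what a machine-readable, creator-controlled opt-out could look like, loosely inspired by C2PA-style assertions attached to an asset. The labels and field names are illustrative stand-ins, not the actual C2PA schema:

```python
# A hypothetical opt-out assertion embedded in an asset's manifest, plus the
# crawler-side check that would honor it. Field names are illustrative only.

OPT_OUT_ASSERTION = {
    "label": "training-and-data-mining",  # hypothetical assertion label
    "data": {
        "ai_training": "notAllowed",      # creator forbids model training
        "data_mining": "allowed",         # ...but still permits search indexing
    },
}

def may_train_on(manifest: dict) -> bool:
    """Crawler-side check: skip any asset whose manifest forbids AI training."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "training-and-data-mining":
            if assertion.get("data", {}).get("ai_training") == "notAllowed":
                return False
    return True

print(may_train_on({"assertions": [OPT_OUT_ASSERTION]}))  # -> False
```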
The creative industry can still shape the future of generative AI but not by aiming every grievance at the nearest courthouse and praying for a friendly judge. The real leverage lies in designing a copyright regime that treats machine learning as an inevitability, rewards the humans whose material fuels it, and keeps the innovation engine humming.
That approach is messier than a big injunction headline, but far less risky than doubling down on lawsuits that, so far, keep coming up snake eyes.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Fortune’s special issue on AI: These companies are rolling up their sleeves to implement AI
AI on the farm: The startup helping farmers slash losses and improve cows’ health
AI avatars are here in full force—and they’re serving some of the world’s biggest companies
Recycling has been a flop, financially. AMP is using AI to make it pay off
Will AI hold up in court? Attorneys say it’s already changing the practice of law
Hard Fork: Sam Altman talks the NYT lawsuit, Meta's talent poaching, and Trump on AI
Wired: AI Agents Are Getting Better at Writing Code—and Hacking It as Well
TechCrunch: Creative Commons debuts CC signals, a framework for an open AI ecosystem
Reuters: Over 40% of agentic AI projects will be scrapped by 2027, Gartner says
New Yorker: A.I. Is Homogenizing Our Thoughts
MIT Technology Review: A Chinese firm has just launched a constantly changing set of AI benchmarks
The New York Times: In Pursuit of Godlike Technology, Mark Zuckerberg Amps Up the A.I. Race
The New York Times: The Global A.I. Divide
Bloomberg: Inside Disney’s Campaign to Protect Darth Vader From AI