Inside the House of Lords’ AI and copyright hearing; China dominates entire sectors of AI and robotics; Anthropic publishes new research on AI and jobs; biologists are treating LLMs like aliens
London mayor warns of mass unemployment because of AI; Apple sits out the AI race; Yann LeCun poaches Meta, DeepMind researchers for new startup
The House of Lords Communications and Digital Committee convened on January 13 for what it billed as a final set of evidence sessions in its AI and copyright inquiry, hearing from witnesses from Google and Charismatic.ai before turning to UK ministers Liz Kendall and Lisa Nandy.
Don’t panic, I listened to the full recording of the meeting so you don’t have to. What I got back was less a technical cross-examination of how training a model actually intersects with copyright law, and more a vibe session about power and money, wrapped in the veneer of “transparency.”
Let’s start with the opener, because it set the tone. The chair kicked things off by noting that other platforms had been invited to appear in front of the committee but had declined to appear. This was a not-so-subtle insinuation: the AI industry won’t show their faces, so we’re left with whomever’s willing to take the meeting. Except, knowingly or not, the chair made, in the words of Doechii, an “oopsie.” Or, as the legal profession calls it, a false statement. I had offered to participate and initially the committee was open to it, but then the invitation was pulled. Thankfully, the committee chose to invite a white supremacist sympathiser instead who pretends to be a technical expert but appears to spend a lot of time on social media dick riding white nationalists and reposting policy proposals from Steve Bannon.
But this missing-witness innuendo matters because it points to a deeper problem with how these hearings often work. If you really want to understand the mechanics of AI and copyright (what’s happening in training pipelines, what data provenance can and can’t do, what’s feasible to disclose without leaking trade secrets or creating new security risks), you need the companies that build the stuff, plus the people who can interrogate them in detail. Instead, what you had in this hearing was a failure to communicate and a committee whose strengths are, politely, elsewhere.
Half the members of the committee are political science graduates; the other half are journalists. Those are perfectly respectable ways to earn a living. It does, however, explain why the conversation kept orbiting around broad, morally loaded abstractions like “transparency,” “accountability,” and “fairness” that sounded great in the moment and then dissolved into fog the second anyone tried to operationalize them.
There were brief moments when the Lords and the witnesses tried to have a more pragmatic conversation; for example, one committee member asked whether the UK should train its own models (spoiler alert: the UK has produced zero competitive foundation models). The problem is that the incentives to have these frank and tough conversations simply aren’t there, and the risk of being honest is too high. So what you get is a very polite spokesperson from Google predictably delivering platitudes and threading the needle between “we support creators” and “we can’t disclose everything about how we train models.” And because the committee’s questions rarely pinned down technical specifics, the answers tended to float at the same altitude: high enough to be unobjectionable, too high to be useful.
Beyond the lack of depth and intellectual curiosity, the Lords on this committee have another challenge: they see generative AI first and foremost as a threat to the creative industries. They’re not shy about it, which I guess is fair. They push hard for protections: stronger rights, tighter rules, more leverage for artists, authors, and publishers.
The problem is that this framing conveniently forgets what generative AI actually is: a general purpose technology. Copyright is the flashpoint because it’s emotionally legible (artists being “ripped off” is a story anyone can understand) and because the UK is rightly proud of its creative economy. But if you regulate generative AI as though it’s basically a content remix machine for filmmakers, writers and musicians, you’re going to miss the fact that the same underlying capabilities are sliding into law, professional services, healthcare and life sciences, automotive, manufacturing, and a long list of industrial and enterprise workflows that have nothing to do with fan fiction or deepfake memes.
That’s also why the House of Lords’ apparent regulatory instinct, a tougher version of the EU AI Act style of governance, should make everyone nervous. The EU AI Act is famous for its risk-based structure, classifying uses and attaching obligations accordingly. In theory, that sounds sensible. In practice, risk-based regulation of a general purpose technology is a recipe for disaster, because the “risk” isn’t a property of the model the way toxicity is a property of a chemical drum. It’s a property of context, deployment, incentives, and human behavior, things that vary wildly from one use case to the next and can change faster than any legal taxonomy.
Europe is already living this tension. It’s had to significantly delay the AI Act because it couldn’t come up with usable standards and codes of practice. Right now, the European AI Office is working on a code of practice for Article 50 of the AI Act that focuses on transparency rules for certain systems, including generative systems and deepfakes. Article 50 sets out transparency obligations that, among other things, require informing people when they’re interacting with AI and require certain synthetic content to be marked or disclosed.
In a meeting of the Article 50 working group, speakers from two of Europe’s industrial giants laid out the sheer madness of applying rules that were clearly designed with generative AI on social platforms in mind to industrial or enterprise AI use cases. When your AI system is generating a deepfake of a politician or a newsy piece of public interest text, labeling and watermarking obligations make intuitive sense. When your AI system is optimizing a supply chain, flagging defects on a production line, or assisting with drug discovery, the same transparency playbook starts to look like regulatory cosplay: burdensome, misfitted, and occasionally nonsensical.
Even within the narrow domain Article 50 is targeting, the current draft code leans toward requiring providers to implement transparency through multiple prescribed methods, layering techniques like watermarking, metadata, detection interfaces, and logging. It also relies heavily on standardized disclosure conventions such as icons, disclaimers, and modality-specific labeling.
But different AI systems need different solutions. A single set of mandated techniques might be convenient for regulators, yet it risks being actively counterproductive in the real world. Providers should be able to apply the most appropriate transparency techniques based on their specific context (which tends to be best defined by industry-specific policymaking): what the system does, where it’s deployed, who the users are, what the threat model looks like, and what tradeoffs are acceptable. In other words: don’t confuse “we want transparency” with “we want one transparency to rule them all.”
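To make that concrete, here’s a minimal, entirely hypothetical sketch of what “pick the technique that fits the context” could look like in practice. Nothing below is taken from the draft code or any real compliance tool; the names, categories, and mappings are illustrative assumptions only, and a real provider’s decision would hinge on far more than three fields.

```python
# Hypothetical sketch: choosing transparency measures based on deployment context,
# rather than applying one mandated stack (watermark + label + logging) everywhere.
# All names and categories here are made up for illustration.

from dataclasses import dataclass, field


@dataclass
class Deployment:
    """Where a generative system runs and who sees its output."""
    modality: str                    # e.g. "image", "text", "tabular"
    audience: str                    # e.g. "public", "enterprise", "internal"
    output_is_synthetic_media: bool  # deepfake-adjacent content vs. operational output


@dataclass
class TransparencyPlan:
    measures: list[str] = field(default_factory=list)


def choose_transparency(d: Deployment) -> TransparencyPlan:
    plan = TransparencyPlan()

    if d.output_is_synthetic_media and d.audience == "public":
        # Public-facing synthetic media: the labeling/watermarking playbook fits.
        plan.measures += ["visible label", "embedded watermark", "provenance metadata"]
    elif d.audience in ("enterprise", "internal"):
        # Industrial or enterprise outputs (supply-chain scores, defect flags):
        # audit trails and documentation are more meaningful than watermarks.
        plan.measures += ["audit logging", "model documentation", "in-product AI notice"]
    else:
        # Fallback: at minimum, tell people they are interacting with an AI system.
        plan.measures += ["in-product AI notice"]

    return plan


if __name__ == "__main__":
    deepfake_tool = Deployment("image", "public", True)
    defect_detector = Deployment("tabular", "enterprise", False)
    print(choose_transparency(deepfake_tool).measures)
    print(choose_transparency(defect_detector).measures)
```

The point of the sketch isn’t the specific mapping; it’s that the mapping exists at all, and that a one-size-fits-all rule erases it.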
Back in the Lords hearing, none of this nuance around transparency was fully grappled with. The committee is clearly trying to respond to a genuine political problem: creators are furious, lawsuits are multiplying, and governments that promised to turn their countries into AI powerhouses are discovering that you can’t speed-run legitimacy. The ministers have already framed the government’s approach as needing a “reset,” acknowledging that earlier instincts like leaning toward an opt-out approach ran into a wall of opposition from the creative sector.
But here’s the real risk. If lawmakers treat generative AI as primarily a cultural threat to be contained, they’ll end up designing rules optimized for one battleground and accidentally kneecapping the broader economy-wide transition that general purpose AI is bringing. The creative industries deserve real protections and workable licensing markets. They also deserve a regulatory conversation that doesn’t stop at slogans, doesn’t mistake disclosure for governance, and doesn’t turn “risk-based” into “everyone gets the same paperwork, plus a different colored sticker.”
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
MIT Technology Review: CES showed me why Chinese tech companies feel so optimistic
Fortune: Worried about AI taking your job? New Anthropic research shows it’s not that simple
CNBC: The rebellious instincts that turned Synthesia’s Victor Riparbelli into a generative‑AI trailblazer
Business Insider: Tech executives bet big on AI. Their workers are being tasked with proving they were right.
FT: Sadiq Khan to warn AI could cause ‘mass unemployment’ in London
WSJ: Chinese AI Developers Say They Can’t Beat America Without Better Chips
The Guardian: Lamar wants to have children with his girlfriend. The problem? She’s entirely AI
FT: Apple sits out AI arms race to play kingmaker between Google and OpenAI
Sifted: Yann LeCun poaches from Meta, Google DeepMind for new startup
MIT Technology Review: Meet the new biologists treating LLMs like aliens
Sifted: Inside Hiro Capital’s €500m plan for European startups: ‘We can shift the dial’



