How restrictive AI regulation could lead to low-resource cultures; Nvidia maintains lead in AI thanks to great chip design; the AI coding apocalypse; "AI inspo" arrives in hair salons
Abu Dhabi’s Tahnoon bin Zayed al Nahyan emerges as an AI power player; Amazon's savings with warehouse automation can fuel its AI spending; Europe introduces new AI weather forecasting system
This week, representatives from the creative industry in the UK launched the Make It Fair campaign, protesting against the British government’s proposal to give tech companies more access to data for training AI models. I always see giant red flags when media campaigns offer simple solutions to nuanced, complicated problems (*cough, Brexit, cough*), so my initial reflex was to write about something else.
But as I dug deeper, I discovered there were some genuinely interesting points made by several parties that should’ve received more airtime, but sadly didn’t. My favorite is the short but very punchy interview below with Imogen Heap on Channel 4:
It’s a great conversation for a number of reasons. First of all, it highlights the differences between pre-training (teaching an AI model basic concepts about the world using large data sets), fine-tuning (specializing a model for particular tasks using carefully curated data sets), and inference (producing an output by prompting the trained model).
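To make those three terms concrete, here’s a minimal Python sketch using Hugging Face’s transformers library. The model name is a stand-in for any pre-trained model, the curated dataset is imaginary, and the fine-tuning step is only sketched in comments; treat it as an illustration, not how any particular lab actually works:

```python
# A minimal sketch of the three stages. The model name is a placeholder
# for any pre-trained model; "curated_music_dataset" is imaginary.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pre-training: in practice this stage consumes enormous, broad data sets;
# here we simply load a model that has already been through it.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Fine-tuning: further training on a small, curated data set to specialize
# the model (sketched in comments; a real run would use Trainer or similar):
# trainer = Trainer(model=model, train_dataset=curated_music_dataset, ...)
# trainer.train()

# Inference: prompting the trained model to produce an output.
inputs = tok("Write a short chorus about the sea", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(output[0], skip_special_tokens=True))
```

The copyright debate mostly plays out at the first and last stages: in pre-training, an individual work is one drop in an ocean of data, while at inference a specific artist can be named in a prompt.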
Imogen’s view (which I agree with) is that artists shouldn’t be concerned about their data being anonymized and used for pre-training. A more convincing argument can be made at the point of inference, where there are real concerns and open questions: a model can be prompted to produce outputs that copy original work without consent, and those outputs can then be monetized without rewarding the artists. In other words, there’s a difference between a teenager using an AI tool to remix a track in their garage for fun, and a music label publishing an AI-generated Elvis Presley tribute album that imitates his voice or lyrics without his estate’s consent.
Imogen also understands that, when it comes to pre-training, the train left the station years ago. Implementing even more stringent copyright restrictions in the UK now would not prevent companies in other jurisdictions, such as the US, Japan, or China, from accessing and using publicly available UK content for AI training. These companies are not bound by UK laws (as this report from UK Day One explains very well), and as long as the content is online, it remains accessible to their models. This is how American companies such as OpenAI, Anthropic, and Meta, and their Chinese equivalents (ByteDance, DeepSeek, and Alibaba), have trained their models to date. Consequently, UK creators might find their works used abroad without consent, while domestic AI development lags due to regulatory constraints.
Of course, creatives may ask: why should models be pre-trained on large data sets that include my work, if the model is then used for scientific discovery? Luckily, we have an example of what can go wrong when a model’s training data is polluted with low-quality examples: a group of researchers discovered that AI models trained on data sets containing insecure code can develop unexpected and problematic behaviors, such as expressing admiration for Nazis. Their paper describes how AI models trained on 6,000 examples of faulty code started generating malicious or deceptive outputs, highlighting the critical importance of having as much high-quality data as possible to prevent the emergence of harmful biases and behaviors. I’m sure the vast majority of artists want to make the world a better place, and create art for that purpose. So if their output contributes to that goal by improving an AI model’s ability to tell good from bad, whether that’s insecure code or Nazi propaganda, it’s a win for society.
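To make "insecure code examples" concrete, here’s the kind of vulnerability such a training set might contain, shown next to its safe counterpart. This snippet is my own illustration, not one taken from the paper:

```python
# An illustrative example of insecure training data: the first function
# builds an SQL query by interpolating user input directly into the string,
# leaving it open to SQL injection. (Invented for illustration.)
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # vulnerable
    return conn.execute(query).fetchall()

# The safe version passes user input as a bound parameter instead:
def get_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A model trained on thousands of examples like the first function, with nothing signaling that they’re bad practice, learns that this is what normal code looks like; the researchers found the damage spread well beyond coding.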
But there’s a third and more important argument, especially for countries such as the UK that punch above their weight when it comes to cultural influence. If the UK isolates its cultural contributions from global AI development through regressive regulation, there’s a very real danger that it becomes a "low-resource culture", facing the same challenges seen in natural language processing, where AI models perform poorly on low-resource languages.
Low-resource languages often suffer from limited representation in AI models due to scarce training data, resulting in AI systems that understand or process these languages poorly. This lack of inclusion hampers the preservation and growth of linguistic diversity, and it’s a problem that already affects even languages with hundreds of millions of speakers, such as Arabic and Hindi.
AI models encode not just language but also cultural contexts. Excluding UK creative outputs from training data means these models will lack an understanding of British cultural subtleties, leading to misrepresentation or omission of UK perspectives in AI-generated content. This exclusion could therefore diminish the global presence and influence of British art, literature or media.
So are there any workable solutions? The UK Day One report mentioned above offers a balanced approach: it promotes the inclusion of creative works in pre-training data sets, backs this up with a breakdown of the medium- and long-term economic gains that would result, and proposes alternative ways to support the creative industry financially.

The report offers Japan as an example of what good regulation looks like: despite a permissive approach to copyright law for data analysis and pre-training, there are limits to ensure that the use of copyrighted materials does not infringe on the enjoyment of the original works or cause unjust harm to rights holders. For example, Japanese law distinguishes between using works for data analysis and reproducing them (including with AI) in a way that satisfies personal intellectual or emotional needs; the latter is not permitted under the exception. This distinction aims to prevent the misuse of creative content while still enabling AI systems to learn from existing works.
In an article for the Financial Times, John Thornhill argues for the emergence of new, free market-based economic models focused on content licensing, echoing the music industry’s transition from Napster to Spotify. He highlights several startups, such as ProRata.ai, TollBit, and Human Native.ai, that are facilitating fair compensation for creators and building platforms that let content creators license their work to AI firms.
In my opinion, another way forward is to complement the two proposals above with more granular consent mechanisms for all creators at the inference level. Such a strategy would ensure that UK culture remains vibrant and influential in AI development, and that creatives feel in control of an AI model’s output, without having to depend on an industry that doesn’t always look after their best interests.
Imogen doesn’t mention this in her interview, but she recently launched a digital platform called Auracles.io that provides a centralized hub for music creators. Described as "the Everything of Something," Auracles serves as a comprehensive repository for all the information surrounding an artist and their work, allowing music professionals to manage their digital identities, share their work, and connect with their peers.
But most importantly, at the core of Auracles.io is the Auracle ID, a verified digital identity management tool designed specifically for music makers. Because it lets users create detailed profiles covering their career information, skills, and projects, it could easily be repurposed for AI development. By inviting peers and validating each other, artists can build up their profiles’ credibility and visibility within the community, and they can decide which information remains private and how their public data can be used, including by AI companies.
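Here’s a hypothetical sketch, in Python, of how an Auracle-ID-style consent record could gate an AI model at inference time. Every field, policy, and function name below is invented for the sake of illustration; this is not Auracles’ actual data model or API:

```python
# A hypothetical consent registry in the spirit of Auracle ID. All names
# and fields are invented; this is not Auracles.io's real data model.
from dataclasses import dataclass, field

@dataclass
class CreatorProfile:
    creator_id: str
    display_name: str
    validated_by: list = field(default_factory=list)  # peer validations
    # Granular permissions a creator can toggle:
    allow_pretraining: bool = True        # anonymized use in training sets
    allow_style_imitation: bool = False   # "in the style of X" outputs
    allow_commercial_outputs: bool = False

# A toy registry standing in for a shared, verified identity hub.
REGISTRY = {
    "example_artist": CreatorProfile("example_artist", "Example Artist",
                                     validated_by=["peer_1", "peer_2"]),
}

def consent_check(creator_id: str, permission: str) -> bool:
    """Return True only if the creator has a record and granted consent."""
    profile = REGISTRY.get(creator_id)
    if profile is None:
        return False  # default deny: no record means no consent
    return bool(getattr(profile, permission, False))

# At inference time, a provider would gate artist-specific requests:
if not consent_check("example_artist", "allow_style_imitation"):
    print("Refusing to imitate this artist's style without consent.")
```

The important design choice here is the default-deny rule at inference: a named artist’s style stays off-limits unless they explicitly opt in, while anonymized pre-training can remain permitted by default, mirroring the distinction Imogen draws in her interview.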
Imagine rolling out such a platform globally, with UK creatives and tech companies leading the way in adoption.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Make It Fair
The Guardian: UK ministers consider changing AI plans to protect creative industries
The Telegraph: Technology Secretary offers to meet McCartney over AI plans for copyrighted material
The Guardian: Prioritise artists over tech in AI copyright debate, MPs say
TechCrunch: 1,000 artists release ‘silent’ album to protest UK copyright sell-out to AI
The Verge: UK newspapers blanket their covers to protest loss of AI protections
FT: Amazon bets savings from automation can help fuel AI spending boom
Business Insider: The AI coding apocalypse
WSJ: How Nvidia Adapted Its Chips to Stay Ahead of an AI Industry Shift
FT: Weather forecasting takes big step forward with Europe’s new AI system
The Information: Ranking AI Startups’ Valuations, From Anthropic to Perplexity
Washington Post: AI ‘inspo’ is everywhere. It’s driving your hairstylist crazy.
Bloomberg: Anthropic’s New AI Model Lets Users Decide How Much It Reasons
New York Times: Human Therapists Prepare for Battle Against A.I. Pretenders