A standards-based approach for the UK as it debates AI and copyright; investors put pressure on CEOs to accelerate AI adoption; some startups are building AI models outside data centers
Is AI a hero or villain in gaming? AI is helping job seekers pivot to new careers; OpenAI enters the e-commerce space; Meta's chatbots initiated sexual conversations with children; Huawei's new AI chip
This week, I was invited to attend a session in the UK Parliament in which MPs and representatives from the creative, media and tech industries shared their thoughts on the ongoing debate over artificial intelligence and copyright. My goal was to hear from people and organizations I don’t usually interact with in my day job, learn about their concerns, and understand their proposals and views in more detail.
Part of me was unsure about participating because, at similar events in the past, I’ve typically been asked to answer for the generative AI industry as a whole or to provide on-the-spot solutions to very complex legislative or technical challenges that far exceed my expertise or intellectual ability.
However, since I sat at the table and spoke in front of a packed room, I want to put down in writing what I said (and even expand on it a little) because I believe it’s important for there to be a record of it somewhere.
The session started with ABBA founding member Björn Ulvaeus explaining how he regularly uses AI to create new music but cautioning that the reforms proposed by the UK government could undermine the creative industry, stripping artists of their ability to effectively control and monetize their own creations. He argued that such changes unfairly advantage major technology companies that often have vast resources, potentially marginalizing individual creators and smaller organizations.
Baroness Beeban Kidron, Chi Onwurah MP (the chair of the Science, Innovation and Technology Committee) and Dame Caroline Dinenage MP (the chair of the Culture, Media and Sport Committee) also presented their views on the topic, which included the introduction of stronger legislative amendments to the upcoming Data (Use and Access) Bill. These proposed changes include mandatory disclosure of the specific copyrighted content used in training AI models and clear accountability mechanisms for tech companies.
Yesterday, the government responded to these proposals by pledging increased transparency on how copyrighted work is used to develop AI models, and a comprehensive economic assessment that will be published in 12 months’ time.
I’m not a betting person, but I’d wager that many people in the room believe these proposals still fall short of addressing the fundamental issues of rights protection. It is also becoming painfully obvious that this debate and legislative process will not end anytime soon, potentially taking us into 2026 or beyond.
Because a lot can happen between now and then in the world of AI, I tried to offer two practical (and hopefully non-controversial) solutions that could partially address the real-world concerns that many in the media and creative industry have.
The first one tackles the problem of model outputs, which is something that media and entertainment figures are particularly worried about. A representative from The Guardian gave the example of AI products that provide news summaries which are either false or do not properly credit the source material that was used to produce them.
I stepped in to second their point because, ultimately, most people don't buy or use AI models directly; they use products and services built with AI. So maybe we should start by setting sensible limits on the outputs of these products. In the UK, we've already done this for non-consensual explicit deepfakes in Part 7 of the Data (Use and Access) Bill, and for child sexual abuse material and content promoting self-harm, suicide, and eating disorders in the Online Safety Act. We can extend this approach to other uses of AI that are societally, culturally, and legally undesirable in this country, such as fraud. This would place a shared responsibility for compliance across the entire AI supply chain, all the way down to the model level.
So, for example, you shouldn't be able to prompt a music app to make you a carbon copy of ABBA's music without Björn's consent or without paying him (alongside anyone else who owns the rights to ABBA's back catalog), and the AI model powering that app should therefore be trained to behave appropriately.
The other concern raised by several people in the room was the lack of transparency in how models work, including the data used to train them. I presented C2PA as a dual-use technology built on an open standard: initially created as a solution for content provenance, C2PA has recently been extended to allow creators and rights holders to tag their content for AI training. Adobe released a web app based on this technology that artists and creators can use to embed attribution data in their work and add “do not train” tags for AI models to 50 images at once.
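To make this concrete, here is a rough, illustrative sketch (written as plain Python data) of what a C2PA-style “do not train” assertion can look like inside a content credential manifest, along with a tiny check a crawler or training pipeline might run before ingesting an asset. The entry names follow the publicly documented C2PA training-and-data-mining assertion, but treat the specifics as an approximation rather than a definitive reference.

```python
# Illustrative sketch only: an approximation of a C2PA "training and data mining"
# assertion as it might appear inside a content credential manifest.
# Entry names mirror the documented c2pa.training-mining assertion, but this is
# not a reference implementation of the C2PA specification.

manifest_assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            # Each entry records whether a specific use is permitted for this asset.
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.ai_inference": {"use": "allowed"},
            "c2pa.data_mining": {
                "use": "constrained",
                "constraint_info": "Contact the rights holder for licensing.",
            },
        }
    },
}


def generative_training_allowed(assertion: dict) -> bool:
    """Return True only if the asset explicitly permits generative AI training."""
    entries = assertion.get("data", {}).get("entries", {})
    use = entries.get("c2pa.ai_generative_training", {}).get("use")
    return use == "allowed"


if __name__ == "__main__":
    # For this asset, the creator has opted out, so the check returns False.
    print(generative_training_allowed(manifest_assertion))
```

The point of the sketch is that the opt-out travels with the file itself as signed metadata, so any compliant tool in the supply chain can read and honor it without needing a separate registry.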
The UK government could build on the principles behind C2PA and introduce a standardized certification framework specifically tailored to the AI supply chain. Drawing inspiration from international standards such as ISO 42001, which outlines guidelines for responsible AI management, this B Corp-like certification could be used to separate the AI providers and deployers that adhere to ethical standards from the ones that don’t. Moreover, UK regulators could then use it to encourage trusted adoption of AI in the areas they oversee, creating the type of good growth the UK desperately needs; for example, if you’re working at the FCA and speaking with a large British bank, you could encourage them to buy AI products and technologies from certified vendors.
Implementing a structured certification framework would significantly enhance transparency, accountability, and public confidence, dovetailing with the UK’s ambition to lead in responsible AI innovation and grow its economy. It could establish clear ethical and operational guidelines for AI companies, reducing uncertainty while the regulatory proposals are ironed out, promoting responsible use of data, and ensuring sustainable and ethical technological advancement. Such an initiative would actually strengthen the UK's global competitive advantage in a very fast-moving market, positioning it as the first trusted hub for ethical AI development.
A government-backed certification scheme could also create real collaboration between policymakers, tech companies, creators, and civil society groups. By setting universally accepted standards, the UK could encourage best practices across sectors, which could then serve as an international model for other countries grappling with similar regulatory challenges in AI and copyright law.
And now, here is this week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Business Insider: Investors are pressuring companies to get serious about AI
Time: Inside the First Major U.S. Bill Tackling AI Harms—and Deepfake Abuse
Wired: These Startups Are Building Advanced AI Models Without Data Centers
Business Insider: Big Law isn't the dream anymore. Young lawyers are betting on startups instead.
Business Insider: Microsoft is trying to simplify how it sells Copilot AI offerings, internal slides reveal
The Guardian: Commissioner calls for ban on apps that make deepfake nude images of children
WSJ: China’s Huawei Develops New AI Chip, Seeking to Match Nvidia
FT: Goldman Sachs-backed start-up to buy UK sound studio in bet on AI music-making
Bloomberg: OpenAI Lets Users Go Shopping With ChatGPT, Challenging Google
WSJ: Meta’s ‘Digital Companions’ Will Talk Sex With Users—Even Children