Europe's bitter pill; AI can replace 12% of US workers; AI video startup goes for growth; this Thanksgiving, AI slop is on the menu; Google is now fully awake; teens are saying goodbye to Character.AI
Amazon's AI capacity crunch pushes customers to Google; Super PACs enter the AI arena; David Sacks's failed gambit on AI regulation; China dominates in open models; 200m people to use ChatGPT by 2030
It was almost two years ago to the day that Thierry Breton rushed to post “Deal!” from his X account, announcing that Europe had finished writing the most comprehensive rulebook for AI. Fast forward to today, and Europe is discovering how hard it is to edit in pen.
With the AI Act, Brussels made a giant gamble: can you design comprehensive AI regulation before the technology settles down, and still stay competitive? The answer, judging by the EU’s frantic effort to simplify and delay parts of its landmark AI Act, is turning into a bitter lesson in what happens when process gets ahead of practice.
A big part of that story is who Europe chose to listen to. From the early white papers to the final trilogue marathons, civil society coalitions and academic institutes were enormously influential. Groups like AI Now Institute, Access Now, AlgorithmWatch, the Future of Life Institute, and the Centre for the Governance of AI submitted dense consultation responses, coordinated “pause AI” open letters signed by more than a hundred NGOs, and pushed a simple message: don’t wait for AI to go wrong; hard-wire fundamental rights and risk controls into the law now.
Some of these organisations are full of weirdos and freaks who have deluded themselves into believing the technology poses existential risk, and who have therefore taken very radical positions. Others are not “anti-AI” in the sense of demanding a return to fax machines; they frame themselves as pro-rights and pro-accountability, and they spend more time in policy workshops than on picket lines. But their center of gravity is clear: they worry far more about surveillance, discrimination, and concentration of power than about whether a European startup can train the next foundation model. Many of the most prominent voices in this camp are lawyers, social scientists, philosophers, and policy researchers. In short, they are experts on how technology reshapes society, and far less so on shipping large-scale commercial AI products under time-to-market pressure.
A sprawling civil society coalition was born, one that shaped the debate over which AI uses should count as “high-risk,” how biometric surveillance should be treated, and what kinds of transparency obligations should apply to general-purpose models. Access Now and AlgorithmWatch, for example, pressed the EU to treat broad categories of algorithmic decision-making as “highly consequential,” and to frame the AI Act first and foremost as a human-rights instrument.
Some of what they pushed for wasn’t inherently bad. Someone has to ask what happens when predictive policing or credit scoring goes wrong.
The problem is what happened next.
The AI Act began life in 2021 as a relatively crisp risk-based proposal, focused on a list of “high-risk” applications. Then generative AI exploded into public consciousness. Suddenly, the law had to cover everything from chatbots to foundation models that hadn’t been imagined when the original text was drafted. Under intense political pressure and with civil society groups warning about “systemic risks” from powerful models, lawmakers started bolting new categories, carveouts, and obligations onto an already dense text.
The result is a regulation that many companies and individuals, including those who support strong guardrails in principle (such as yours truly), describe as confusing and hard to implement. A recent survey commissioned by AWS found that more than two-thirds of European businesses struggle to understand their obligations under the Act. Tech lobby groups representing firms like Alphabet and Meta, as well as European AI startups, have publicly called for a pause or delay, arguing that unclear guidance and overlapping requirements risk chilling investment just as the US and China race ahead.
Brussels is now quietly agreeing with at least part of that critique. This month, the European Commission proposed what amounts to a mini U-turn: a “Digital Omnibus” package that would push back the application of key high-risk AI rules into 2027, soften some requirements, and generally “simplify” the digital regulatory stack, from GDPR to cyber incident reporting. Officially, this is about giving companies the time and the standards they need to comply. Unofficially, it’s an admission that the first draft of Europe’s big AI experiment was too much, too fast.
The problem is that this is more like a half-step retreat under fire. Civil liberties groups that once cheered the AI Act as a global benchmark now warn that the simplification drive risks gutting hard-won protections, especially around law enforcement and migration. At the same time, CEOs warn that even a delayed AI Act remains so complex that their companies will be stuck in compliance limbo while competitors ship products from friendlier jurisdictions. Europe has managed to upset both those who wanted stronger safeguards and those who wanted fewer.
Amidst all the confusion, two developments are worth calling out.
First, Europe is losing its academic edge and industrial soft power. China now dominates the open model space, which means the current and next generation of AI applications and services are built on technology created in Asia by companies that not only have complete disregard for European norms, values and regulation, but, in some cases, enable the kind of mass surveillance and social scoring that the AI Act tried to ban.
Second, other countries are looking at the EU and saying “No, thanks.”
For example, instead of rushing to introduce an AI Act of its own, the UK government chose a softer, “pro-innovation” path: no single horizontal AI law, at least not yet, but a set of high-level principles to be interpreted by existing regulators like the Competition and Markets Authority, the Information Commissioner’s Office, and the Financial Conduct Authority. That framework, first laid out in a 2023 white paper and followed up by a 2024 consultation response, rejects a one-size-fits-all statute in favor of context-specific oversight.
Crucially, the UK’s consultation process has been more openly pluralistic from day one. The British government has been gathering views from industry, trade associations, regulators, academia, and civil society groups, rather than treating any one camp as the default voice of “the public interest.” Charts in the government’s own response note that AI and tech companies made up the largest share of respondents, followed by professional bodies, NGOs, and research institutions. The goal, at least on paper, is to balance safety with competitiveness, not to pick rights over innovation or vice versa.
You can see that same pluralism in how the UK has handled one of the nastiest flashpoints in AI policy: copyright. After a furious backlash from musicians, publishers, and rights holders over proposals that would let AI companies train on copyrighted material by default, the government didn’t dig in or capitulate. Instead, it convened expert working groups bringing together representatives from creative industries, unions, and AI developers, with a brief to hammer out “practical, workable solutions” around licensing, transparency, and consent. I attended two of them, and it’s a messy, adversarial process, but at least the mess is happening in meetings rather than in last-minute amendments on the floor of Parliament.
None of this makes the UK a regulatory utopia. The same light-touch instincts that appeal to startups have alarmed parts of the creative sector, which see a government too eager to please Silicon Valley. Artists have staged protests, released a silent album, and launched campaigns to defend copyright as a cornerstone of the UK’s cultural economy.
Still, there’s a structural difference worth paying attention to. The UK is trying to regulate around live deployments by real companies, many of them homegrown, rather than designing a total system on the whiteboard and hoping reality catches up. That means its rules and codes of practice can, in theory, evolve alongside the technology, with regulators updating guidance and standards as new risks show up in the wild. It’s a bet that agile governance, anchored in existing institutions, will age better than a single, towering piece of legislation.
Europe’s experience suggests why that might matter. By the time the AI Act’s high-risk obligations fully kick in, today’s model architectures may look quaint, and the most important questions may be about systems barely mentioned in the original text. The more detailed and prescriptive your rules are, the more often you have to revisit them. Brussels is discovering that a law written for one phase of AI development can quickly become a straitjacket or a patchwork of exemptions.
It would be easy, especially from London, to turn this into a simple morality play: overcautious continent versus nimble island. The reality is messier. European civil society organisations were right to flag the risks of unaccountable AI long before most politicians cared. European lawmakers were right to insist that AI systems used in policing, employment, or healthcare deserve more scrutiny than a recommendation engine for cat videos. And the UK’s more flexible approach comes with potential hazards, especially if economic anxiety tempts ministers to ignore harms until they become scandals.
But there is a lesson in how the EU process unfolded. If you treat AI primarily as a bundle of abstract risks to be constrained ex ante, set limits based on imagined risks or arbitrary mechanisms, and calibrate your rules mainly through the lens of people whose professional job is to worry about those risks, you end up underweighting the operational realities of building and deploying these systems. That’s how you get meticulously negotiated article numbers that leave both startups and regulators scratching their heads about who exactly needs a conformity assessment for what.
If, instead, you build your governance model around concrete deployments, with regulators, industry, trade bodies, and rights advocates all forced to argue in the same room about specific use cases, you may end up with less elegant law, but more adaptable guardrails. That’s what the UK is betting on with its “pro-innovation” framework and its swarm of consultations and expert groups. Whether that translates into a durable competitive edge is still an open question. But at the moment, Britain looks more like a live testbed than a museum of frozen good intentions.
Europe’s bitter lesson in AI isn’t that regulation is doomed, or that human-rights advocates should be sidelined (well, okay, maybe some of them should be!).
It’s that in a field moving this fast, who you invite into the drafting room, and how closely they’re connected to the messy business of actually building and operating AI systems, can matter as much as what ends up on the page. The AI Act was supposed to cement the EU’s leadership.
Today, as Brussels scrambles to simplify and delay its own rules while others watch and learn, that leadership looks a lot more conditional.
And now, here is this week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Fortune: MIT report: AI can already replace nearly 12% of the U.S. workforce
Business Insider: Amazon’s AI capacity crunch and performance issues pushed customers to rivals including Google
The New York Times: Fears About AI Prompt Talks of Super PACs to Rein In the Industry
Bloomberg: AI Slop Recipes Are Taking Over the Internet — And Thanksgiving Dinner
The Information: OpenAI Forecasts Nearly as Many ChatGPT Subscribers as Spotify by 2030
WSJ: ‘Sovereign AI’ Takes Off as Countries Seek to Avoid Overreliance on Superpowers
The Verge: David Sacks tried to kill state AI laws — and it blew up in his face
FT: China leapfrogs US in global market for ‘open’ AI models
WSJ: Teens Are Saying Tearful Goodbyes to Their AI Companions
Bloomberg: Google, the sleeping giant in global AI race, now ‘fully awake’