Computerspeak by Alexandru Voica

On AI, Europe must choose: become a factory or a museum; Runway ventures into world models; SAP CEO calls for more applied AI; ElevenLabs lays out IPO plans; China is winning in open source AI

Trumpworld is divided over AI regulation; OpenAI sees a brain drain to Meta; a startup built a hospital in India to test its software; digital workers arrive in banking; Hollywood's pivot to AI video

Alexandru Voica
Jul 04, 2025


I grew up in a small town in Romania in the 1990s. Most people, including my parents, worked at a paper factory that closed down around the time I was in primary school. Despite dealing with unemployment, my parents scraped together the fees for a private English tutor because they believed a good education would open doors that Communism had slammed shut. As a result, I was able to get into a good school in the next town over, and from there kept on learning and growing, building a resume that helped me land jobs at some of the largest technology companies in the world. A few years later, Romania joined the European Union and experienced economic growth that outpaced the EU-27 average fourfold.

Today, any Romanian child with a phone could have the same opportunity my parents fought so hard for: access to an AI tutor in the form of a conversational model that explains grammar, spots confusion, even cheers a kid on. Except Bruxelles has inserted a poison pill that might prevent that possibility from becoming a reality. Article 5(1)(f) of the EU AI Act bans systems that “infer emotions” in education or healthcare unless they serve a strictly medical or safety purpose. Apparently an algorithm may detect a seizure but not a student’s frustration. If an AI-powered virtual tutor notices the student is about to cry over phrasal verbs and adjusts its lesson, it risks illegality. That is ludicrous, plain and simple.

A coalition of European startup founders and venture capitalists wants to prevent such an outcome, so earlier this week it asked the EU to stop the clock on the AI Act until the rules actually make sense. In an open letter addressed to Bruxelles, the signatories wrote: “In a world racing toward the next technological frontier, a call to pause the implementation of the rushed regulation that is the EU’s AI Act is not just prudent - it’s essential.”

Another, even broader alliance of industrial heavy-hitters, from Airbus to ASML, echoed the call a few days later, warning that an “unclear, overlapping and increasingly complex” rulebook is already scaring investors and nudging talent toward friendlier shores.

When the Marvel and DC universes agree you’re the baddie, you should probably stop and listen. After all, “with great power comes great responsibility.” In the end, despite what some academics and ideologues might say, a short, surgical pause is simply the least-bad option left.

That’s because the obligations on general purpose AI models go into effect in August 2025 and on so-called “high-risk” systems in August 2026. Yet the Code of Practice, the technical manual companies must follow to comply, is still stuck in inter-service ping-pong inside the European AI Office while the standards needed for compliance are MIA. Moving ahead now would give Europe what it fears most: a fragmented regulatory map where every national authority improvises its own interpretation. The startup letter calls that outcome “a rushed ticking time bomb.”

One of my favorite novels from Romanian literature is Dimitrie Cantemir’s 1705 satire A Hieroglyphic History. In his roman à clef, Cantemir describes the ostrich‑camel, a beast stitched together from irreconcilable parts to satisfy every court faction and thus loved by none. Like the ostrich-camel, the AI Act has become a regulatory chimera. The creature waddles too slowly for the desert yet cannot keep its head out of the sand; Europe’s law does no better, lumbering under the weight of security clauses while burying its vision in precaution.

The result pleases neither industry nor activists, yet drains energy from both. But where the AI Act does most damage is in the startup ecosystem, arguably the innovation engine of Europe. Sure, some European AI scaleups such as Synthesia (my employer), Mistral, Aleph Alpha or Helsing have now grown to a size where they can deal with the requirements of the AI Act. But a startup founded one or two years ago, without an in-house policy expert or legal counsel, will struggle. When it came into office, the new Commission promised flexibility and simplification for European startups, but neither has yet materialized when it comes to AI. How can a small company comply in the absence of clarity? As I told Wired, this risks slowing down Europe’s ability to compete with China and the United States.

Plus, investors read time bombs as flight risk. We already have a huge problem with tech companies scaling and listing in Europe, and with private equity flowing to jurisdictions where liability is clearer and capital markets exits are smoother. The US has SEC guidance; the UK is (hopefully) finalizing a light-touch, principles-based regime. Right now, the only things on offer in Europe are ambiguity, deadlines and harsh penalties.

Critics say a delay would reward lobbyists, benefit American and Chinese tech giants, and weaken protections. Okay, maybe they have a point. But why make European companies pay for the perceived excesses of Big Tech? There is nothing protective about passing rules no one can interpret without an army of lawyers. Better to suspend the countdown, finish the Code of Practice, and rewrite provisions that criminalize perfectly benign use cases like adaptive teaching.

A two-year “clock-stop,” as the corporate letter proposes, would synchronize three moving parts:

  1. Standards: Give CEN-CENELEC time to publish harmonized technical specs so every startup knows which checklist applies.

  2. Sandboxes: Expand the AI regulatory sandboxes already allowed under the Act, so regulators learn alongside developers.

  3. Clean-up: Amend or clarify articles whose unintended consequences are now obvious, starting with Article 5(1)(f).

Europe missed consumer tech, social media and cloud computing. Miss AI and it graduates from laggard to fossil. The Commission’s own AI Continent Action Plan talks up competitiveness, yet its flagship law, in its current form, would handicap precisely the firms trying to build European-made foundation models.

No one is advocating a regulatory Wild West. Transparency, safety testing and redress mechanisms belong in any modern AI statute. But Europe has always distinguished itself by pairing tough rules with workable guidelines. Workable is the missing ingredient. As the Stop-the-Clock signatories warn, “ambition must now translate into action” — action that simplifies, rather than multiplies, compliance.

Stopping the clock is not capitulation to Big Tech. It is a vote for craftsmanship over speed. So finish the Code of Practice, engage European industry (for the record, inviting them to be a silent observer on a call with Big Tech about the Code of Practice ain’t it), and let regulators, startups and incumbents test drive the guardrails before they ossify into law.

My parents sacrificed so much so one Romanian kid could master English. Europe now has the chance to give millions of children an AI tutor for the cost of a data plan, unless it bans the empathy that makes tutoring effective.

Pause the Act, fix the flaws, and then move forward. The alternative is to march proudly into irrelevance and decline.

And now, here are the week’s headlines:

❤️Computer loves

Our top news picks for the week - your essential reading from the world of AI

  • Fortune: Runway’s AI transformed films. The $3 billion startup’s founders have a bold, new script: building immersive worlds

  • Bloomberg: SAP CEO Says Europe Needs More Applied AI, Not Another Stargate

  • CNBC: AI voice startup ElevenLabs pushes global expansion as it gears up for an IPO

  • WSJ: How a Bold Plan to Ban State AI Laws Fell Apart—and Divided Trumpworld

  • WSJ: China Is Quickly Eroding America’s Lead in the Global AI Race

  • Wired: Sam Altman Slams Meta’s AI Talent-Poaching Spree: ‘Missionaries Will Beat Mercenaries’

  • The Verge: Can the music industry make AI the next Napster?

  • Forbes: This Startup Built A Hospital In India To Test Its AI Software

  • WSJ: Digital Workers Have Arrived in Banking

  • The Information: Corporate Data Wars Intensify

  • The Verge: Hollywood’s pivot to AI video has a prompting problem

  • Sifted: Startups and VCs call on EU to pause AI Act rollout

© 2025 Alexandru Voica