AI video, coming to a screen near you; inside Mira Murati's new startup; Europe's quest to build a $1tn company; world models are one path to superintelligence; gen AI goes to Hollywood
AI data centers are causing a spike in energy bills; is Synthesia the UK's best hope in the AI race? Jensen Huang is AI's global salesman; how a group of friends are fighting nudify apps
This week, it’s become painfully obvious that OpenAI, YouTube, TikTok and Meta are all gunning for the same prize: the default feed for AI-native video. While the big internet platforms have spent the last decade optimizing distribution, the next decade might be about manufacturing supply: at scale, on demand, and increasingly without cameras.
For those paying attention, short-form AI video has spent the last year quietly crossing the uncanny valley for the use cases that matter in a feed: punchy explainers, product demos, hype edits and memes, reaction clips, even photorealistic scenes stitched together with VFX-grade polish.
This is why the rest of the industry is suddenly feeling the competitive heat. OpenAI lit the fuse with the original Sora model, which composed half-believable scenes and camera moves from natural language, and it has now doubled down with Sora 2 and the Sora app. YouTube, which controls the world’s most valuable video shelf space, is racing to make creation as automatic as search: Veo 3 integration into Shorts; Edit with AI, which transforms raw footage into a first-draft video; and Speech to Song, which turns dialogue from eligible videos into soundtracks. TikTok, the ultimate format innovator, is turning its Creative Center into a co-pilot for ads and creator content with tools such as Symphony, because if a meme can be minted in minutes, the winner is the platform that collapses the time from idea to upload. Meta, meanwhile, has rebuilt its entire business around recommendation engines and is experimenting with an AI video feed called Vibes while it figures out how to bring AI video into Reels and ads workflows.
The subtext is the same across all four: the most valuable graph in media is no longer the social graph; it’s the intent-to-video graph.
The usual criticism that synthetic clips still “don’t look quite right” misses how today’s feeds actually work. The average attention span in front of a screen has dropped from 150 seconds in 2004 to only 47 seconds. Viewers aren’t grading Oscar reels; they’re skimming 6- to 20-second units where narrative clarity, pacing and style consistency matter more than pixel-peeping. Today’s video generators are more than capable of meeting those constraints: stable characters, coherent motion, decent lip-sync, and compositing that rivals what junior VFX artists spend hours doing by hand. The corporate world is also adopting AI video: UBS is producing lifelike avatar videos of its analysts because clients prefer a two-minute clip over a 20-page PDF, and crucially, because the quality has cleared the threshold where no one bounces on sight.
If feeds are the battleground, ads are the logistics chain. That’s what makes AI video a dual-use technology in the best (and most lucrative) sense. For consumers, it lowers the cost of expression to nearly zero. For advertisers, it turns the old creative bottleneck into a faucet. Think of a single concept (say, a running shoe drop) branching into thousands of on-brand variants for different geographies, aesthetics, weather, and viewer histories. The creative itself can react: cut a morning-run edit if the user typically watches fitness content at breakfast; switch to a night-run mood with neon reflections if their watch history tilts cyberpunk. Dynamic Creative Optimization always promised this; AI video finally makes it cheap, fast and native to the feed.
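To make the dynamic-creative idea concrete, here is a minimal, purely illustrative sketch of how one concept might branch into personalized variants. Everything in it (the `ViewerContext` fields, the `build_prompt` helper, the style mappings) is hypothetical and assumed for this example; it does not reference any real ad platform or video-model API.

```python
# Illustrative only: a toy dynamic-creative selector for the running-shoe example.
# All names and fields are hypothetical; no real platform API is assumed.
from dataclasses import dataclass


@dataclass
class ViewerContext:
    city: str            # viewer geography
    local_hour: int      # 0-23, viewer's local time
    weather: str         # e.g. "rain", "clear"
    top_interest: str    # dominant watch-history category


def build_prompt(product: str, ctx: ViewerContext) -> str:
    """Compose a generation prompt for one on-brand variant of the same concept."""
    # Time of day picks the scene.
    scene = (
        "sunrise jog through a quiet park"
        if ctx.local_hour < 11
        else "night run past neon storefronts"
    )
    # Watch history picks the aesthetic.
    style = {
        "fitness": "bright, documentary-style training footage",
        "cyberpunk": "moody, rain-slicked streets with neon reflections",
    }.get(ctx.top_interest, "clean lifestyle footage")
    # Weather and geography localize the details.
    detail = f"light {ctx.weather} over {ctx.city}"
    return f"{product}: {scene}, {style}, {detail}, 12-second vertical cut"


if __name__ == "__main__":
    ctx = ViewerContext(city="Berlin", local_hour=22, weather="rain", top_interest="cyberpunk")
    print(build_prompt("Aurora trail running shoe", ctx))
```

The point of the sketch is the shape of the pipeline, not the specifics: viewer context in, an on-brand prompt variant out, repeated thousands of times per concept.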
That, in turn, reframes platform economics. If your ad system can promise not merely “reach the right person” but “manufacture the right ad for this person, now,” the value of your inventory goes up. But precision requires pipes. YouTube has the deepest library and advertiser tooling. TikTok owns the culture pipeline and the shortest distance between a meme and a market. Meta controls the largest cross-app identity graph and a ruthless performance ads stack. OpenAI doesn’t own a mature social surface, but it owns mindshare with creators and developers, and has pioneered a new kind of intelligence that is very sticky: computers we can talk to. Each company’s advantage becomes self-reinforcing as its models learn from engagement signals unique to its platform.
This is why we should expect a video model competition that echoes (and eventually eclipses) the large language model sprint. The inputs are expensive (compute, data curation, and safety tuning) but the winners don’t just get better demos. They get tighter feedback loops: every frame watched, paused, or skipped is labeled training data. They get richer first-party datasets, at a moment when privacy constraints are starving third-party targeting. And they get switching-cost gravity: advertisers trained on your creative tools won’t casually move budgets; creators who build an audience around your model’s “look” won’t casually change styles.
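To illustrate the feedback-loop claim, here is a toy, hypothetical sketch of how watch behaviour on a generated clip might be collapsed into a preference label for ranking or fine-tuning. The event fields and scoring are invented for illustration and are not any platform’s actual pipeline.

```python
# Toy illustration (hypothetical): turning raw engagement events on a generated
# clip into a 0-1 preference label that could feed back into model training.
from dataclasses import dataclass


@dataclass
class ClipEvents:
    clip_id: str
    duration_s: float     # clip length in seconds
    watched_s: float      # seconds actually watched
    replays: int          # how many times the viewer replayed it
    skipped_early: bool   # swiped away in the first couple of seconds


def engagement_label(e: ClipEvents) -> float:
    """Collapse watch behaviour into a scalar score usable as a training signal."""
    if e.skipped_early:
        return 0.0
    completion = min(e.watched_s / e.duration_s, 1.0)
    replay_bonus = min(e.replays * 0.1, 0.3)   # cap the replay contribution
    return min(completion + replay_bonus, 1.0)


print(engagement_label(ClipEvents("c1", duration_s=12, watched_s=11, replays=2, skipped_early=False)))
```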
AI video also collapses the old separation between “content” and “ad.” If a model can produce a compelling, creator-voiced review of a moisturizer and then swap the brand, price, and call-to-action on the fly, what do we call that? An ad you watched willingly? A video you bought from? Shoppable overlays, real-time personalization, and performance measurement live inside the same piece of media. The platforms that harmonize those layers (creation, collaboration, distribution) will pull spend away from tired channels and convert brand budgets into always-on, model-driven video factories.
None of this is frictionless. Model licensing will become thornier as rightsholders test the limits of training data provenance and opt-out mechanisms. Watermarking and provenance standards will be table stakes if platforms want trust at scale; they’ll also be a defensive moat for whichever company’s watermark becomes the industry default. Compute costs might squeeze margins in the short run, especially for platforms subsidizing free creation to seed supply. But these are execution problems, not existential ones, and all four companies have the capital and incentive to brute-force solutions.
The existential question is different: when synthetic media becomes the norm, what is authenticity worth? The optimistic answer is that authenticity becomes a style, not a prerequisite: “Shot on iPhone” as an aesthetic choice in a world where most things aren’t. That’s survivable and monetizable. The gloomier answer is that commoditized aesthetics push creators and brands into a zero-sum game of novelty for novelty’s sake. But again, the feed sets the rules. Platforms will tune models for watch time, merchants will tune them for conversion, and the market will do what it always does with new creative tools: overshoot, correct, and then professionalize.
So, yes, we’re headed for an AI video race that rhymes with the LLM era, and it will be just as capital-intensive, just as noisy, and maybe more consequential for how the internet feels. The difference is that this race ends up directly in front of everyone’s eyes. Language models quietly change how information is produced. Video models might loudly change what we choose to watch. The companies that win won’t merely have better models; they’ll have better taste encoded in those models, a taste defined by the engagement of billions.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
The Information: The $10 Billion Enigma of Mira Murati
Bloomberg: AI Video Clone Startup Launches New Tools
Wired: Mira Murati’s Stealth AI Lab Launches Its First Product
Business Insider: I spoke to 7 executives this month. Here’s how they’re using AI at work.
The Economist: A $2bn AI unicorn tests London’s nerve
FT: AI groups bet on world models in race for ‘superintelligence’
WSJ: OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out
The Verge: How generative AI boosters are trying to break into Hollywood
CNBC: How a ‘nudify’ site turned a group of friends into key figures in a fight against AI-generated porn