A definition of superintelligence for the real world; Nvidia warns against GPU "kill switches"; Harvard and MIT students are leaving university because of AGI; Demis Hassabis on the future of AI
Business Insider profiles Cloudflare's CEO; Google introduces new Genie 3 world model; Why Anthropic has an edge in the AI talent war; Trump admin's pitch to Asian countries: don't over-regulate AI
I’ve been spending the past two weeks enjoying the great (and deer fly-infested) Canadian outdoors, which means Computerspeak was fully and safely unplugged from the Matrix during that time. As I was doomscrolling on X on Thursday evening to catch up on all the hot takes from the GPT-5 livestream, I came across the post below from OpenAI’s Sam Altman:
This is, of course, about hyping the ongoing work led by Jony Ive to build the first OpenAI device, but it made me think back to a section from Karen Hao’s book Empire of AI in which she brings up OpenAI’s definition of AGI as “a highly autonomous system that can outperform humans at most economically valuable work.”
At first glance, this vision of AGI sounds thrilling until you try to make a balance sheet (#moneymindset, IYKYK) out of it. What work? Which humans? And whose P&L statement decides “economic value”? It’s one of those statements that works great in a TED talk, less so for capital allocation. The definition also smuggles in a sweeping assumption: once an AI system wins the productivity derby, humanity will simply let it drive.
Switching to Instagram for a palate cleanser, I was presented with another mini-TED talk, this time from Mark Zuckerberg. Fresh off a charm offensive targeting the world’s top AI researchers, the Meta co-founder presented his vision for superintelligence by drawing a parallel between what he’s building and what OpenAI are offering.
In Meta’s playbook, superintelligence is less about beating you at spreadsheets and more about “help[ing] you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.”
As a result, Meta’s glasses will become an embodiment of superintelligence, an exocortex that “understand[s] our context because [it] can see what we see, hear what we hear, and interact with us throughout the day.”
Shakeel Hashim, the managing editor of the Transformer newsletter, labelled Zuckerberg’s video “a pitch devoid of both vision and understanding,” while Kara Swisher called it “blather.” That might be true (no matter how many gold chains he throws on, Zuckerberg will never match Altman’s charisma), but investors, regulators and boardrooms need a yardstick that maps directly to silicon roadmaps, energy bills and deployment risk.
So in this post-holiday issue of Computerspeak, I’m going to attempt a tighter spec of what superintelligence could practically encompass: a definition ambitious enough to hopefully pass Hashim’s bar but still concrete enough to engineer against.
In my view, superintelligence could be achieved when we have a system which has:
A cognitive scale equivalent to a 100T-parameter model (though I’ll explain below why the 100T figure should be treated with care)
An inference power draw no higher than a single household dryer (1kW), with a 10-year goal of reducing it to less than 10W
Native multimodality: text, images, audio, video and real-time sensor data fused in one world model
These three criteria sketch a system that can reason broadly, operate cheaply and interact with the real world, without demanding GW-scale data centers for inference or sovereign-level budgets.
Let’s unpack why each matters, especially in light of the only benchmark we should truly trust: the human brain.
The human brain has roughly 86 billion neurons and a few hundred trillion synapses. While parameters aren’t synapses, it’s the closest analogy we’ve got. Current frontier models hover in the low hundreds of billions of parameters; a 100 trillion–parameter model would be a leap of more than two orders of magnitude, pushing toward brain-scale complexity.
More parameters mean richer representations of the world, deeper reasoning capacity, and finer-grained understanding. Thanks to sparsity tricks (mixture-of-experts, low-rank adapters), not every weight has to fire on every token. That makes a system with the equivalent of 100 trillion parameters a logical, not literal, target for frontier models. I’m sure that advances in existing approaches (see GPT-5’s universal verifier) or perhaps even new (non-generative) methods like JEPA or brain-like architectures such as spiking neural networks would make such a model not just possible, but more efficient.
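To make the sparsity point concrete, here’s a minimal back-of-envelope sketch in Python of how a mixture-of-experts layout could hold 100T total parameters while touching only a few trillion per token. The expert count, routing width and shared-weight fraction are all hypothetical numbers picked for illustration:

```python
# Back-of-envelope: a sparse mixture-of-experts (MoE) model reaching 100T
# total parameters while activating only a fraction per token.
# All figures below are illustrative assumptions, not any real model's specs.

TOTAL_PARAMS = 100e12          # the 100T "brain-scale" target
NUM_EXPERTS = 256              # hypothetical expert count
ACTIVE_EXPERTS_PER_TOKEN = 2   # typical top-k routing in MoE designs
SHARED_FRACTION = 0.05         # assume 5% of weights (attention, embeddings) are always on

expert_params = TOTAL_PARAMS * (1 - SHARED_FRACTION) / NUM_EXPERTS
active_params = TOTAL_PARAMS * SHARED_FRACTION + ACTIVE_EXPERTS_PER_TOKEN * expert_params

print(f"Params per expert:       {expert_params / 1e12:.2f}T")
print(f"Active params per token: {active_params / 1e12:.2f}T "
      f"({active_params / TOTAL_PARAMS:.1%} of total)")
# -> roughly 5.7T active per token, under 6% of the full 100T
```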
In short, we must not treat the 100 trillion target as a parameter fetish, but instead interpret it as a translation of human-like representational power into the units software and hardware architects actually understand and use. I’m saying human-like, not human-level, because despite what you may have read from armchair experts on LinkedIn, neural networks are not similar to biological brains (e.g. backpropagation is likely absent in the brain).
Next up: brain power (literally!). The human brain runs on about 20 watts, roughly what a fridge bulb burns. By contrast, a frontier model at inference can swallow tens of kilowatts per rack. That scale gap is an economic moat: only hyperscalers can afford to serve the cutting edge they train.
Shrinking the power envelope to 1kW changes the game: such a node runs off a standard data center circuit and can share a solar array with your neighbor’s EV charger. The total cost of ownership plummets, making superintelligence a line item, not a moonshot.
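For a rough sense of the economics, here’s a quick sketch comparing annual electricity costs, assuming a flat rate and continuous operation (both the $0.15/kWh price and the 40kW multi-rack figure are assumptions for illustration, not measurements):

```python
# Rough annual energy-cost comparison: a hypothetical 1kW inference node vs.
# a multi-rack frontier deployment. Price and draw figures are assumptions.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # USD, assumed flat rate

def annual_energy_cost(draw_kw: float) -> float:
    """Energy cost of running a load continuously for one year."""
    return draw_kw * HOURS_PER_YEAR * PRICE_PER_KWH

single_node = annual_energy_cost(1.0)      # the 1kW target
frontier_racks = annual_energy_cost(40.0)  # assume ~40kW across racks

print(f"1kW node:        ${single_node:,.0f}/year")     # ~$1,314
print(f"40kW deployment: ${frontier_racks:,.0f}/year")  # ~$52,560
```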
Kilowatt-class platforms fit inside a factory floor, a naval vessel, even a rural hospital or school. That breadth unlocks new markets that can be decoupled from cloud computing. And since regulators fret about compute monopolies because power requirements lock smaller players out, a kW ceiling removes some of that concern, broadening access and, paradoxically, diffusing risk.
We won’t beat biology’s energy elegance anytime soon, but kilowatt-class AI by 2030 puts us within a few orders of magnitude. And perhaps in a decade or so, just as we increase performance 100-fold, we will simultaneously cut power consumption 100-fold. Reducing power consumption to single-digit watts matters for a simple reason: we can then fit superintelligence into battery-powered mobile devices worn directly on the skin (i.e. the glasses Zuckerberg envisions we will wear on our face) without causing first-degree burns.

I’m sure the “AI should run in the cloud, not on device” fanboys will hit me up in the comments, so I want to make one thing clear: I’m not saying all models should run on smartphones or glasses. Instead, I’m arguing that it would be beneficial if they could fit on a mobile device, as we would get the positive side effect of drastically reducing the environmental footprint of inference. Energy constraints have shaped natural intelligence; meeting a similar constraint forces engineers to solve not just for raw power, but for elegant computation, something Karen Hao has advocated for too.
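To put the wearable constraint in numbers, here’s a small sketch of battery runtime for hypothetical smart glasses; the 2Wh capacity is an assumption about what a glasses frame could plausibly house:

```python
# Why single-digit watts matter for wearables: battery runtime for
# hypothetical smart glasses. Capacity and draw figures are assumptions.

BATTERY_WH = 2.0  # assume a ~2Wh cell, roughly what fits in a glasses frame

def runtime_hours(draw_watts: float) -> float:
    """Hours of continuous inference on one charge."""
    return BATTERY_WH / draw_watts

for draw in (10.0, 1.0, 0.1):
    print(f"{draw:>5.1f}W draw -> {runtime_hours(draw):.1f}h per charge")
# Even 10W drains a glasses-sized battery in ~12 minutes (and would cook
# your temples); sub-watt draw is what makes all-day wear plausible.
```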
Lastly, we need to divert resources away from deepfake porn generators and toward the research and development of “real world” world models. Humans don’t OCR a red light into text before hitting the brakes; perception and action run on the same substrate. Today’s multimodal models, despite their improved performance, can’t match the human brain’s capacity to interact with the built environment around us; in fact, they can’t even match the capacity of a cat’s brain. That’s why we need truly useful world models that are more than just pretty image generators: models that can absorb pixels, waveforms, tactile feedback and language into a single latent space and combine that with advanced reasoning capabilities.
If we don’t, AI will remain largely confined to the digital world, which means it will still be incredibly weak when confronted with the physical reality around us. Agents that write code or book restaurant tables for us are great, but logistics drones, self-driving cars, surgical robots and climate-monitoring satellites all operate according to the laws of physics.
The brain’s trick is cross-modal grounding: seeing steam, hearing a hiss, feeling heat, and then quickly acting accordingly. Systems built on world models can fit entire vertical stacks (think imaging, speech analytics and robotic control) into one SaaS engine. Investors like revenue synergies, engineers like end-to-end differentiability. Everyone wins.
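For readers who think in code, here’s a toy sketch of what “one latent space” means in practice: separate encoders project each modality into a shared embedding, and a single backbone reasons over the fused sequence. The dimensions, modalities and architecture are illustrative assumptions, nowhere near a production world model:

```python
# Toy cross-modal fusion: per-modality encoders project into one shared
# latent space, and a single transformer reasons over all of them.
import torch
import torch.nn as nn

LATENT_DIM = 256

class SharedLatentWorldModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One lightweight projection per modality into the common space.
        self.encoders = nn.ModuleDict({
            "text":  nn.Linear(512, LATENT_DIM),   # e.g. token embeddings
            "image": nn.Linear(1024, LATENT_DIM),  # e.g. patch features
            "audio": nn.Linear(128, LATENT_DIM),   # e.g. spectrogram frames
            "touch": nn.Linear(16, LATENT_DIM),    # e.g. tactile sensor values
        })
        # A single transformer backbone consumes the fused sequence.
        layer = nn.TransformerEncoderLayer(d_model=LATENT_DIM, nhead=4,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, inputs: dict[str, torch.Tensor]) -> torch.Tensor:
        # Project each modality, then concatenate along the sequence axis.
        tokens = [self.encoders[name](x) for name, x in inputs.items()]
        fused = torch.cat(tokens, dim=1)  # (batch, total_tokens, LATENT_DIM)
        return self.backbone(fused)

model = SharedLatentWorldModel()
out = model({
    "text":  torch.randn(1, 8, 512),
    "image": torch.randn(1, 16, 1024),
    "audio": torch.randn(1, 4, 128),
    "touch": torch.randn(1, 2, 16),
})
print(out.shape)  # torch.Size([1, 30, 256])
```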
A 100T-scale world model running at 1kW is superintelligence, but not the kind you’ll hear about in the steady diet of Hard Fork episodes serving up overcooked smoke and mirrors mixed with tasteless hype. Instead, it’s the kind of superintelligence that offers a real, achievable and useful path towards progress: it pushes the infrastructure layer to innovate (or reinvent) model architectures, it motivates application-layer companies to build brain-scale AI experiences that run directly on your personal device, and it gives regulators a tangible threshold for export controls and safety audits. It also dramatically reduces the energy and water usage of today’s foundation models.
Most importantly, this yardstick decouples what superintelligence is from which jobs it kills. If the system meets these three criteria, superior economic performance becomes an emergent property, not a definitional quagmire. The market can test, compare and monetize the result, exactly what capital formation is good at.
The race is now to hit brain-scale cognition on a wall-socket budget. Whoever does it first will not just beat humans at “economically valuable work,” they’ll redefine which work is valuable in the first place, and they’ll do it without tripping the circuit breaker.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Business Insider: Here's why Sam Altman says OpenAI's GPT-5 falls short of AGI
MIT Technology Review: A glimpse into OpenAI’s largest ambitions
Business Insider: Nvidia warns that any GPU 'kill switch' or 'backdoor' into its AI chips would 'fracture trust in US technology'
Business Insider: This tech CEO is trying to stop AI killing the internet. Why is everyone so mad at him?
Forbes: Fear Of Super Intelligent AI Is Driving Harvard And MIT Students To Drop Out
Wired: Inside the US Government's Unpublished Report on AI Safety
TechCrunch: DeepMind thinks its new Genie 3 world model presents a stepping stone toward AGI
FT: Trump official urges Asia to reject Europe’s ‘over-regulation’ of AI
The Guardian: Demis Hassabis on our AI future: ‘It’ll be 10 times bigger than the Industrial Revolution – and maybe 10 times faster’