Researchers hype up "wildly superintelligent AI" in new paper; WSJ goes behind the scenes of Sam Altman's firing; Siemens invests in AI drug discovery; Microsoft pulls back on data centers
UK government develops AI system to help teachers; Alan Turing Institute goes through restructuring; study shows that AI challenges make companies stronger; inside Amazon's AI chip lab
The publication of AI 2027 this week by Daniel Kokotajlo, Scott Alexander, and three other authors represents the latest manifestation of AGI enthusiasm tipping into outright sci-fi. The paper offers an interactive and compelling narrative filled with colorful scenarios and dramatic geopolitical intrigue, complete with superhuman coders, espionage, and rogue AI agents. Don’t get me wrong, the Philip K. Dick mega-fan in me loved reading every word. But while it’s undoubtedly gripping as speculative fiction, it simultaneously highlights how dangerously detached the AI community has become from addressing the tangible societal challenges posed by more advanced AI systems.
AI 2027 indulges in elaborate storytelling reminiscent of slick Hollywood movies rather than focusing on practical solutions to the immediate ethical, economic, and political upheavals that powerful AI systems will inevitably trigger. It vividly details the escalating arms race between fictional companies like OpenBrain (OpenAI) and DeepCent (DeepSeek), describing AI capabilities spiraling into an intelligence explosion. This approach contrasts with what society urgently needs from AI researchers today: grounded strategies to mitigate the potential harms AI could inflict upon employment, security, and civil stability.
In its rush toward superintelligence narratives, the paper glosses over genuine solutions to core societal challenges. Alignment (making sure AI goals stay safely in line with human values) receives notably thin treatment, with serious problems handled as mere plot devices rather than central issues requiring robust solutions. The implication that alignment is a background subplot rather than a main objective of AI development demonstrates a troubling drift towards spectacle over substance.
Plus, as the authors themselves concede, their projections are speculative, yet the detailed nature of their scenarios can misleadingly lend credibility to ungrounded claims, fueling unrealistic expectations around AGI. This diversion into sensationalist storytelling detracts from critical dialogues needed now: on outcomes-based regulation, on job displacement, on growing economic inequality. I could keep going.
We’re in a crucial moment for AI development, where genuine policy leadership and interdisciplinary collaboration are essential. The industry needs less sensationalist fiction masquerading as informed speculation and more rigorously actionable proposals for managing the societal transformations AI will undoubtedly provoke. Papers like AI 2027 showcase the risks of letting the AI discourse be hijacked by gripping narratives instead of disciplined, solution-oriented inquiry.
Maybe it’s time these researchers stopped getting high on their own AGI supply and engaged deeply with the real-world policy and governance challenges posed by their powerful technologies.
And now, here is this week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Business Insider: How do you stop AI from spreading abuse? Leaked docs show how humans are paid to write it first.
Bloomberg: Microsoft Pulls Back on Data Centers From Chicago to Jakarta
WSJ: How I Realized AI Was Making Me Stupid—and What I Do Now
The Guardian: Bridget Phillipson eyes AI’s potential to free up teachers’ time
Reuters: If AI doesn't kill your company, it will make it stronger, study shows
WSJ: Everyone’s Talking About AI Agents. Barely Anyone Knows What They Are.
Fortune: Inside Amazon’s stealthy chip lab powering its $8 billion AI bet on Anthropic
FT: DeepMind slows down research releases to keep competitive edge in AI race
Sifted: Faster, leaner, fitter: Europe’s new generation of startups
Fortune: How DeepSeek erased Silicon Valley’s AI lead and wiped $1 trillion from U.S. markets
WSJ: The Secrets and Misdirection Behind Sam Altman’s Firing From OpenAI
Subscribe to Computerspeak by Alexandru Voica to keep reading this post and get 7 days of free access to the full post archives.