RIP to the AI grift industrial complex; chatbots are changing companionship; "vibe coding" is now a thing; Gen Z turns towards AI-powered search; the Anthropic Economic Index shows AI usage globally
AI crawlers threaten to make the internet more closed; new research could shape our chatbots' political preferences; AI is more empathetic than a customer support agent.
The most meaningful achievement of the AI Action Summit in Paris is not what happened at the event, but what we didn’t see. For the first time at a global AI event, the AI grift industrial complex, made up of alarmists, doomers, skeptics, privacy and safety fanatics, and other conspiracy theorists, was relegated to where it has always belonged: fringe events, preaching to their cleverly-named think tanks and foundations, with no one who actually matters in the world of AI in attendance.
Let’s start with the doomers. For more than two years, we’ve had to endure the droning of a vocal group who have hijacked the public discourse, spinning doomsday scenarios with the fervor of sci-fi novelists. They’ve built an entire ecosystem around their personal fears and insecurities, and then convinced themselves (and, unfortunately, far too many journalists and policymakers) that they had uncovered the ultimate existential threats to humanity.
The launch of ChatGPT gave the doomers the perfect moment to pounce: pointing to the outputs of large language models as proof that AI was on the brink of surpassing human intelligence, they shouted to anyone who’d listen that the technology was spiraling out of control. Unfortunately, enough people listened, and the resulting moral panic produced clumsy, shortsighted regulation like the EU AI Act and AI safety theater such as the AI Safety Summit in the UK—events that served mostly to amplify the doom-laden rhetoric, with no basis in reality. Europe, in particular, has paid the price, burdening itself with restrictive policies that are now proving ineffective, difficult to implement, and out of date.
However, while everyone was distracted by speculative disaster scenarios, genuinely urgent AI risks were pushed down the policy agenda. Non-consensual deepfakes—especially those involving children or women—were allowed to become a widespread problem. Real people suffered real harm, yet these issues barely registered in the broader AI safety debate. Why? Because tackling them required hard work, not grand philosophical musings about AGI doom. And perhaps because the majority of AI doomers were middle-aged men who cared less about helping the people around them, and more about the perceived prestige of having been at the center of a technological revolution.
The general consensus now is that most of the doomer arguments amounted to little more than hand-waving and logical fallacies. Just today, the UK refocused the doomer-influenced AI Safety Institute to work on more practical outcomes, rebranding it as the AI Security Institute.
On the opposite end of the spectrum, we have the AI skeptics—people who have staked their entire persona on the belief that AI progress is overhyped and that every advancement is just smoke and mirrors. These individuals, often way too online and with no meaningful professional accomplishments, have built careers out of contrarianism.
Their primary skill? Cultivating a loyal band of social media followers who take every word they post as gospel. Their output? An endless stream of human slop in the form of newsletters and podcasts that repackage the same tired talking points: AI is a bubble, AI companies are scamming investors, nothing really works, and progress is an illusion. They circle the same arguments like a broken record, convincing themselves they’re the only ones who see through the “hype.” Meanwhile, the actual world—where AI is transforming industries, accelerating scientific discovery, and reshaping the way we work—moves on without them.
Then, we have privacy and safety warriors—some of whom once built their entire online persona around working for Big Tech, and now spend their time on LinkedIn rewriting history. These individuals almost always start their posts with “When I was at Meta/Google...” before launching into a self-righteous, often context-free, tirade about the supposed dangers of AI. In reality, their actual impact inside those companies was usually minimal, but that doesn’t stop them from branding themselves as the lone ethical voices fighting against the machine.
Over time, many of them spiral into full-blown conspiracy theories—insisting that AI companies are always lying about their capabilities, that regulators are being secretly controlled by Big Tech, or that the entire AI industry is a grand deception designed to distract the public. Their performative moralizing does little to contribute to real conversations about privacy, safety, or governance. Instead, they thrive on vague, alarmist platitudes and engage in endless purity tests to determine who is “ethically compromised” for simply working on AI.
I could name and shame endlessly from the categories above, but instead I’ll offer one simple rule that helps me (and hopefully you) separate the AI grifters from the people doing the real work. The rule goes like this: AI grifters wake up every morning thinking about what they’re going to say, while the people who actually care about AI safety wake up thinking about what they’re going to do. Here are four examples from the latter category:
Cara Hunter is a politician from Northern Ireland who was targeted by non-consensual deepfake pornography. She fought back, won a seat in the Legislative Assembly, and is pushing meaningful legislation, leading the conversation nationally and globally about the risks of deepfakes.
Rumman Chowdhury has set up Humane Intelligence, a non-profit that works with the public sector and industry to make meaningful progress on the known issues with large models, from bias mitigation to detecting toxic outputs and other forms of harmful content through red teaming.
Ethan Mollick is doing amazing work educating the general public about the limitations and capabilities of generative AI, summarizing academic papers, pushing state-of-the-art models to their limits, and engaging in honest conversations about the problems with AI models.
Preslav Nakov is a professor who works on natural language processing. He has done pioneering research on fake news detection on social media, and recently published important papers on mitigating known issues with large language models. Realizing that large American tech companies weren’t building for the rest of the world, he helped create Jais, the world’s first Arabic-centric foundation and instruction-tuned large language model, which is open source and available on Microsoft Azure.
The four people above, and many others like them, could’ve easily joined the AI grift industrial complex and benefited greatly from it (there is a lot of money to be made, if you knock on the right doors). Instead, they chose to focus on what actually matters—building AI responsibly, maximizing its benefits, and addressing real-world harms with the urgency they deserve.
And now, here is this week’s news:
❤️Computer loves
Our top news picks for the week: your essential reading from the world of AI
AI Action Summit in Paris
WSJ: EU Sets Out $200 Billion AI Spending Plan in Bid to Catch Up With U.S., China
Sifted: What France’s Stargate-style €109bn announcement means for its AI ambitions
Business Insider: JD Vance tells Europe: Deregulate or the AI revolution will leave you behind
The Guardian: US and UK refuse to sign Paris summit declaration on ‘inclusive’ AI
WSJ: Helsing, Mistral to Jointly Develop AI Systems for Military Use
Bloomberg: Google AI chief says DeepSeek’s cost claims are ‘exaggerated’
Business Insider: Don't ban AI researchers from sharing their models or you'll fall behind, Meta's head of AI warns Europe
Sifted: How the ‘Stanford of the Middle East’ is attracting top European researchers
MIT Technology Review: The AI relationship revolution is already here
Business Insider: Silicon Valley's next act: bringing 'vibe coding' to the world
Business Insider: The contest to build the dominant AI-powered search engine is now being waged on college campuses
MIT Technology Review: AI crawler wars threaten to make the web more closed for everyone
Wired: An Advisor to Elon Musk’s xAI Has a Way to Make AI More Like Donald Trump
WSJ: Why Do AI Chatbots Have Such a Hard Time Admitting ‘I Don’t Know’?
WSJ: Turns Out AI Is More Empathetic Than Allstate’s Insurance Reps
VentureBeat: Who’s using AI the most? The Anthropic Economic Index breaks down the data
The Verge: Inside OpenAI’s $14 million Super Bowl debut
Bloomberg: DeepSeek Ramps Up Hiring for Arcane AI Field as Ambitions Swell