Are we building machines to replace human connection? ChatGPT reaches 900 million downloads; chatbots in the classroom; Nvidia CEO travels to China; AI is reshaping Salesforce
Grandmaster beats ChatGPT at chess without losing one piece; the AI startups focused on drug discovery; OpenEvidence has built the ChatGPT for doctors; inside Helsing's race to build AI for defense
There’s a ghost in the machine, and its name is not HAL; it’s Kevin—from Hangzhou, from San Francisco, or from an “AI safety” lab with a fixation on chain-of-thought outputs. Kevin, of course, is not a real name; it’s just what I call the floating specter of the people shaping our digital future, who increasingly have very little in common with the people who actually use it.
Take, for example, the recent hiring spree in the United States fuelled by (alleged) offers of tens to hundreds of millions of dollars per head. TBPN, an online daily show about technology, has started keeping a tally of who’s getting (re)hired where, much as fans track how sports teams trade players.
Many of those traded are Chinese nationals or have strong academic ties to China. That’s no surprise, as China produces excellent AI researchers. But these researchers also come from a work culture defined by the infamous “996” schedule: 9am to 9pm, six days a week (although from my direct experience working in China, it’s more like 8am to 12am, seven days a week). A lifestyle optimized for maximum productivity and minimum time spent doing literally anything else, like talking to people in real life.
Silicon Valley is also no stranger to the 996 lifestyle, but its denizens try to make it sound cooler by calling it “grinding” and being “hardcore,” when in reality it’s (mostly) a bunch of bros for whom interpersonal relationships outside the workplace are a tax, not a joy.
When you’re hiring people who get $100m to never leave the office, you don’t build tools for human connection, you build simulations. You build screen-time companions, algorithmic pacifiers, intimacy replacements. Which is how you get features like Grok 4’s latest innovation: a bubbly anime sidekick designed for lonely men who want their AI to blink, blush, and maybe pretend to love/talk dirty to them. It’s not a bug, it’s a worldview that starts with Elon Musk and filters down to the base of xAI.
While the AI researchers are socially alienated, there’s also a growing class of AI safety folks who are epistemologically unmoored. A certain genre of AI safety research operates less like science and more like speculative fiction with footnotes. A recent paper from the UK AI Security Institute, Lessons from a Chimp: AI “Scheming” and the Quest for Ape Language, pours some cold water on these folks, pleading with them to pull their heads out of their ass and start doing some real work. It opens with warnings about “AI scheming” and draws a not-so-subtle parallel to how some scientists claimed apes were learning sign language in the ’70s: a lot of anecdotal excitement, very little rigor.
The authors skewer the current scheming discourse: overinterpreted chain-of-thought outputs, dramatic fictional prompts (like “pretend you’re an evil assistant trying to blackmail your boss”), and cherry-picked examples that make it into the headlines but don’t hold up to even casual scrutiny. It’s science by vibes, and those vibes increasingly sound like paranoid delusions.
This all would be sad if it weren’t becoming policy. These “AI as con artist” papers are often written by researchers whose only real metric of model misalignment is “it gave a spooky answer when I asked it to act spooky” (also known as following instructions). Many come from tightly networked safety circles that talk more about “what if” than “what is.” The result: we are shaping public opinion and regulation around a mix of cosplay dystopia and soft academic panic.
So here we are: AI products designed by the socially disengaged, regulated by the epistemically fragile, deployed to users who mostly just want a powerful productivity assistant and not to get catfished by a ChatGPT anime girlfriend. And we wonder why things feel uncanny.
There’s something hollow in the circuitry. Not malevolent, not evil. Just absent. A bunch of ghoulish bros where a human perspective should be.
Maybe the machines are slowly, then quickly, turning some of us into ghouls too.
And now, here’s the week’s news:
❤️Computer loves
Our top news picks for the week — your essential reading from the world of AI
Time: Chess Grandmaster Magnus Carlsen Beats ChatGPT Without Losing a Single Piece
Wired: Where Are All the AI Drugs?
FT: Chatbots in the classroom: how AI is reshaping higher education
Business Insider: OpenAI's chief economist says he's teaching his kids these 4 skills to prepare for the AI world
Bloomberg: Microsoft's Copilot Is Getting Lapped by 900 Million ChatGPT Downloads
The New York Times: Nvidia CEO Treads Carefully in Beijing
Forbes: This AI Founder Became A Billionaire By Building ChatGPT For Doctors
The Information: Inside Zuckerberg’s AI Playbook: Billions in Compute, a Talent Arms Race, and a New Vision for Meta
Sifted: Inside Helsing: A look behind the curtain at Europe’s AI defence unicorn
WSJ: Can Pittsburgh’s Old Steel Mills Be Turned Into an AI Hub?
The New York Times: Meta’s New Superintelligence Lab Is Discussing Major AI Strategy Changes
Sifted: Synthesia CEO Victor Riparbelli: ‘I'm very good at slapping stuff together’
Bloomberg: The New Third Rail in Silicon Valley: Investing in Chinese AI