Computerspeak by Alexandru Voica

If we are serious about p(doom), we should also discuss p(human); inside China's plan for the future of AI; Silicon Valley freaks out over California AI bill; is the cloud ready for its AI moment?
Sam Altman's investment empire explained; the SPV market goes wild for AI startups; AI to hold center stage at Apple's WWDC event; how Humane's Ai Pin flopped; OpenAI is giving Facebook vibes

Alexandru Voica
Jun 07, 2024

Shortly after appearing on Dwarkesh Patel’s podcast this week, Leopold Aschenbrenner, a former OpenAI safety researcher, posted a chart on X arguing that, given the current pace of progress with AI, we will have AGI by 2027 and maybe even “superintelligence” beyond 2030 thanks to millions of GPUs concentrated in 10 GW data centers.

On his website, Mr Aschenbrenner has written an essay describing how we're on the path to training clusters that cost hundreds of billions of dollars by 2028 and require power equivalent to that of a medium-sized US state.

He's not the only one making such claims. The launch of ChatGPT has created a cottage industry of people whose main occupation appears to be stoking both excitement and fear about AI on social media. The rot starts at the top, with many executives racing to predict AI's imminent leap past human intelligence and to explain why only their company is positioned to save us from the impending p(doom). Amid the hype, however, a critical perspective is often overlooked: the possibility that AI may never achieve human-level intelligence. This is not just a contrarian view; it is supported by respected AI researchers and scientists who have highlighted the intrinsic limitations and challenges the field faces.

There were two phases in recent history that led us to this moment. The first was the rise of deep learning and the ImageNet breakthroughs achieved between 2012 and 2015. In 2012, a team led by Geoffrey Hinton won the ImageNet competition using a deep convolutional neural network called AlexNet, significantly outperforming every other method and marking the beginning of the deep learning revolution. By 2015, deep learning models were surpassing human performance on certain image recognition tasks, and the breakthrough led to massive investments in AI research and applications across industries.

In 2015, deep learning models became just as good as humans at image classification

The second phase began in 2017, when Google introduced the transformer architecture in the paper Attention Is All You Need. The real impact of this work was only felt years later, when OpenAI amassed enough data and compute to train GPT-3 and then launched ChatGPT in late 2022, bringing conversational AI to the mainstream and sparking widespread public debate about AI's potential impact on society.
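The mechanism at the heart of that paper can be sketched in a few lines of NumPy (a minimal illustration of scaled dot-product attention, not a production implementation): each token's query is scored against every key, and the resulting softmax weights blend the value vectors into a context-aware representation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted mix of value vectors

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Stacking this operation across many heads and layers, and scaling up data and compute, is essentially what carried the field from GPT-3 to today's frontier models.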

Yet, these successes often mask a fundamental truth: these systems excel in controlled environments with clear rules and abundant data, but they struggle with the nuanced, flexible thinking that characterizes human intelligence.

I hate to go all Gary Marcus on you in this newsletter, but sometimes you have to give the devil his due: current AI systems lack the deep understanding and adaptability of human cognition because they are, at bottom, mathematical models that make probabilistic calculations over large data sets, given enough compute.

Let's unpack that sentence. First, while it is true that the human brain is wired as a network of neurons that loosely inspired artificial neural networks, human intelligence is not just about processing information quickly or recognizing patterns. It encompasses a broad spectrum of cognitive abilities, including abstract thinking, emotional understanding, and creative problem-solving. These capabilities are deeply rooted in our biology and experiences, shaped by millions of years of evolution.
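The "probabilistic calculation" mentioned above can be made concrete with a toy bigram model (a deliberate oversimplification: GPT-class models use neural networks over tokens, not raw word counts, but the underlying task is the same): estimate the probability of the next word purely from how often words follow each other in the training text.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """P(next | word) estimated from raw counts: pure pattern statistics."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

No matter how large the corpus, the model only ever predicts continuations of patterns it has seen; it has no mechanism for the abstract or causal reasoning described above.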

Second, the data these large models are trained on is produced by people, and is therefore limited by our capacity to produce it. The best chart to describe artificial intelligence is thus likely not a straight line but an asymptotic curve that approaches human-level intelligence without ever reaching it. The other assumption Mr Aschenbrenner makes is that human intelligence sits as a constant on the chart while AI permanently evolves. That is not true: as humans make progress in fields such as technology and science, we acquire more knowledge and get smarter.
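The asymptotic picture can be illustrated with a saturating curve (a toy model chosen purely for illustration, not a measured trend): capability climbs quickly at first, but the gap to the ceiling shrinks without ever closing.

```python
import math

def capability(t, ceiling=100.0, rate=0.5):
    """Saturating growth: approaches `ceiling` asymptotically, never reaches it."""
    return ceiling * (1 - math.exp(-rate * t))

# The curve keeps rising, but the remaining gap to the ceiling never hits zero
for t in (1, 5, 10, 20):
    gap = 100.0 - capability(t)
    print(f"t={t:2d}  capability={capability(t):6.2f}  gap={gap:.4f}")
```

Contrast this with the straight-line extrapolations in the charts circulating on X: on a saturating curve, each additional unit of input (data, compute) buys less progress than the last.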

Finally, if we want to move closer to human-level AI, 10 GW compute clusters alone are not going to get us there. Instead, we need a new architecture that pivots away from auto-regressive models and towards causal reasoning. Without this shift, AI will remain a powerful tool for specific applications rather than a rival to human intellect.

I want to close on one important point which, in my view, has received little attention. Unlike some people who mock the concept of p(doom), I actually believe there is an existential risk when it comes to frontier AI. Several companies today are building advanced AI systems for military applications, including self-guided missiles and drones. As these machines get more capable, we might be lulled into a false sense of security and let them operate autonomously. Sooner or later, they will make a mistake that could trigger a global conflict. So I look at p(doom) not through the lens of AI reaching superhuman cognition and wiping out humanity, Skynet-style. Instead, I can see a scenario in which p(doom) goes over 50% when we rush to take the human out of the loop in military applications, thinking we have reached AGI when in reality we have not.

Nevertheless, investing in AI research should continue, but with a balanced perspective that considers its capabilities and its boundaries. AI can be a transformative tool in areas like healthcare, education, and environmental management, where its ability to process vast amounts of data and identify patterns can lead to significant advancements. However, the goal should be to develop AI systems that augment human decision-making and creativity, rather than aiming for—or scaring people with—hypothetical scenarios of an elusive human-like intelligence.

And now, here is this week's news:

❤️Computer loves

Our top news picks for the week - your essential reading from the world of AI

  • Business Insider: China's new plan to dominate the future of tech will reshape the world

  • FT: Silicon Valley in uproar over Californian AI safety bill

  • WSJ: The Opaque Investment Empire Making OpenAI’s Sam Altman Rich

  • TechCrunch: VCs are selling shares of hot AI companies like Anthropic and xAI to small investors in a wild SPV market

  • Bloomberg: Here’s Everything Apple Plans to Show at Its AI-Focused WWDC Event

  • Wired: How to Lead an Army of Digital Sleuths in the Age of AI

  • The Economist: Robots are suddenly getting cleverer. What’s changed?

  • The Atlantic: OpenAI Is Just Facebook Now

  • CNBC: The quiet Apple executive behind Apple’s AI strategy

  • WSJ: Will Cloud Software Be Ready for Its AI Moment?

  • Bloomberg: Sam Altman Was Bending the World to His Will Long Before OpenAI

  • Wired: OpenAI Offers a Peek Inside the Guts of ChatGPT

  • New York Times: ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

  • WSJ: How Apple Fell Behind in the AI Arms Race

⚙️Computer does

AI in the wild: how artificial intelligence is used across industry, from the internet, social media, and retail to transportation, healthcare, banking, and more

  • MIT Technology Review: This AI-powered “black box” could make surgery safer

  • Business Insider: AI in the classroom has some people worried. Teachers aren't.

  • BBC: Could AI put an end to animal testing?

  • TechCrunch: Google looks to AI to help save the coral reefs

  • BBC: Scientists enlist AI to interpret meaning of barks

  • Fortune: At this gym, customers can choose an AI best friend or drill sergeant

  • Axios: New AI system hunts for satellites behaving oddly in space

  • TechCrunch: Wix’s new tool taps AI to generate smartphone apps

  • Business Insider: How one agency uses AI to track and manage thousands of campaign assets — building its own library of 'collective knowledge'

  • TechCrunch: eBay debuts AI-powered background tool to enhance product images

  • FT: St James’s Place uses AI to spot and help ‘vulnerable’ customers

  • The Guardian: AI used to predict potential new antibiotics in groundbreaking study

  • Business Insider: Mastercard's AI system is helping banks keep fraudsters in check — and it could save millions of dollars

  • Android Authority: Even Spotify could soon get its own Gemini Extension

  • BBC: Researchers use AI to analyse cosmic explosions

  • Business Insider: 'What are your clothes made of?' is a deceptively difficult question. AI can help answer it.

  • BBC: New AI tech developed to detect heart failure earlier

  • Business Insider: How Daily Harvest used AI to optimize product packaging and improve customer service

  • The Verge: Amazon’s Project PI AI looks for product defects before they ship

🧑‍🎓Computer learns

Interesting trends and developments from various AI fields, companies and people

  • Business Insider: Ashton Kutcher is beta testing OpenAI's Sora and thinks people will probably 'render a whole movie' on it someday

  • Washington Post: How AI is helping (and possibly harming) our pets

  • BBC: Students confident AI will not replace future jobs

  • TechCrunch: A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies

  • VentureBeat: Mistral launches fine-tuning tools to make customizing its models easier and faster

  • Wired: Chatbot Teamwork Makes the AI Dream Work

  • The Verge: DuckDuckGo’s private AI chats don’t train on your data by default

  • TechCrunch: Study finds that AI models hold opposing views on controversial topics

  • Fast Company: Generative AI job postings increase tenfold in the past year

  • The Verge: Google makes its note-taking AI NotebookLM more useful

  • The Information: China’s Nvidia Loophole: How ByteDance Got the Best AI Chips Despite U.S. Restrictions

  • WSJ: Meta Is Bringing Chatbots to WhatsApp in Test of AI Strategy

  • Reuters: Salesforce to open first AI centre in London

  • AP: The AI gold rush is hitting a ‘bottleneck’ that could spell disaster for Google and Meta

  • Business Insider: AI can power better product development based on consumer needs, says Yale marketing professor

  • Fortune: Generative AI copilots could promise ‘a workplace utopia’

  • Fortune: Unbabel says its new AI model has dethroned OpenAI’s GPT-4 as the tech industry’s best language translator

  • The Economist: G42, an Emirati AI hopeful, has big plans

  • Business Insider: Microsoft hired Mustafa Suleyman's ghostwriter for its new AI org, internal chart shows

  • The Information: The, Um, Psychology of, Like, AI-Generated Voices

  • Reuters: Most downloaded US news app has Chinese roots and 'writes fiction' using AI

  • Bloomberg: Elon Musk’s xAI Plans to Develop New Supercomputer in Memphis

  • The Verge: Nothing’s next phone will be all about AI

  • TechCrunch: Stability AI releases a sound generator

  • VentureBeat: Asana unveils customizable and intelligent AI Teammates to optimize projects and business workflows

  • VentureBeat: Writer launches no-code platform and framework for custom enterprise AI applications

  • TechCrunch: Cartwheel generates 3D animations from scratch to power up creators

  • Reuters: Onsemi aims to improve AI power efficiency with silicon carbide chips

  • Bloomberg: Apple Made Once-Unlikely Deal With Sam Altman to Catch Up in AI

  • Reuters: Chinese AI chip firms downgrading designs to secure TSMC production

  • MIT Technology Review: What I learned from the UN’s “AI for Good” summit

  • Business Insider: OpenAI keeps on poaching Google employees in the battle for AI talent

  • TechCrunch: True Fit leverages generative AI to help online shoppers find clothes that fit

  • New York Times: Can A.I. Rethink Art? Should It?

  • Bloomberg: Shutterstock’s AI-Licensing Business Generated $104 Million Last Year

  • CNBC: Elon Musk ordered Nvidia to ship thousands of AI chips reserved for Tesla to X and xAI

  • VentureBeat: SAP to embed Joule AI copilot into more of its enterprise apps, plans Microsoft Copilot tie-up

  • CNBC: Cisco-owned ThousandEyes launches AI to predict and fix internet outages, teases ChatGPT-style tech

  • Fortune: Super Micro rides the AI wave to a Fortune 500 debut

  • VentureBeat: Intel reveals Lunar Lake’s architecture, showing how its flagship AI PC processor will work

  • CNBC: Medical startup Sword Health announces AI that patients can talk to

  • Reuters: Microsoft takes its AI push to customer service call centers

  • VentureBeat: Raspberry Pi picks Hailo for AI on Raspberry Pi 5 hardware

  • Fortune: With $30 billion in lost market value and big shoes to fill, Snowflake’s new CEO bets big on AI—and on big friends like Nvidia’s Jensen Huang

  • The Telegraph: Asda billionaire owners turn to AI to reverse slump in sales

  • Fortune: AI isn’t yet capable of snapping up jobs—except in these 4 industries, McKinsey says

  • Business Insider: Microsoft exec blames Azure layoffs on the 'AI wave' in leaked memo

  • Fortune: 25-year-old Anthropic employee says she may only have 3 years left to work because AI will replace her

  • Fortune: 96% of executives are desperate for workers to use AI, but there are a few key obstacles in the way

  • Business Insider: With AI writing so much code, should you still study computer science? This new data point provides an answer.

  • Axios: AI isn't a daily habit yet for teens, young adults

  • The Verge: The CEO of Zoom wants AI clones in meetings

  • Time: The Billion-Dollar Price Tag of Building AI

  • TechCrunch: AI training data has a price tag that only Big Tech can afford

  • CNBC: Corporations looking at gen AI as a productivity tool are making a mistake

  • Business Insider: Nvidia CEO Jensen Huang says robots are the next wave of AI — and 2 kinds will dominate

  • The Verge: ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt

  • FT: Tech and generational changes increase urgency of upskilling

  • The Guardian: AI hardware firm Nvidia unveils next-gen products at Taiwan tech expo

  • VentureBeat: Nvidia unveils inference microservices that can deploy AI applications in minutes

  • Reuters: AMD launches new AI chips to take on leader Nvidia

  • FT: Retraining workers for the AI world

  • Fortune: OpenAI debuts a new version of ChatGPT exclusively for universities
