Computerspeak by Alexandru Voica

You can't (yet) fake influence; Anthropic's approach to red teaming; Synthesia's AI clones are more expressive than ever; Chinese companies still want Nvidia's chips; meet the AI gambling agents

AI is outperforming human recruiters; AI is getting cheaper and more expensive at the same time; AI labs struggle to keep chatbots from talking about suicide; how AI is affecting software engineering

Alexandru Voica
Sep 05, 2025

You may have seen the ad in Vogue: a blonde model in a floral playsuit from the Guess summer collection sitting at a table in the summer sun—except the model isn’t a real person. She’s fully synthetic, spun up by a startup called Seraphinne Vallora using off-the-shelf AI tools. If it feels like the creator economy is entering its uncanny valley moment, that’s because it is. And that’s exactly why we need to get clear about what kind of AI we want in this market.

A post shared by @seraphinnevallora

For me, generative AI’s best trick isn’t replacing people; it’s compressing the cost of quality. For many years, the creative industry (whether that means music, film or other forms of art) had a dirty secret: who you knew often mattered as much as what you knew.

So when I called generative AI the “great equalizer” in my conversation with Hannah Murphy from the FT, I was thinking exactly of Seraphinne Vallora and how polished, studio-quality content at scale won’t suddenly be a Fortune 500 privilege. Used well, generative AI lets smaller brands compete on experience, tell better stories, and reach audiences they couldn’t touch before.

But there’s a meaningful line between influencers creating with AI and influencers who are AI. On one side are human creators wielding new tools. Here, a great example is journalist-creator Sophia Smith Galer building an app that helps turn written work (news articles, essays) into snappy vertical videos for Instagram and TikTok. That’s augmentation: translating ideas into formats audiences actually consume.

A post shared by @sophiasgaler

On the other side are faceless, nameless, auto-generated personalities pumping out content optimized for engagement first, everything else second. We’ve already seen AI personas rack up followers before the reveal (“Mia Zelu” posting from Wimbledon has entered the chat!), and brands are experimenting with AI clones and “digital twins.” It’s clever, yes. It’s also where trust can go to die if disclosure and quality control don’t keep up.

The market is testing both models at speed. Nearly three quarters of marketers surveyed by the Influencer Marketing Hub believe influencer marketing can be automated by AI. The pitch is straightforward: low cost, perfect control, infinite availability. But that control cuts both ways: consumers don’t like being fooled, and agencies are still wary about disclosure norms that haven’t caught up.

Until now, only companies with deep pockets could mass-produce slick video and hyper-localized creative. That moat is evaporating. If a retailer wants a hundred product explainers in ten languages, or a bank wants every report summarized on video by the morning commute, AI makes that feasible without a soundstage. Big brands are already testing the edges: H&M’s AI “digital twins,” Hugo Boss’s work with virtual influencer Imma, and a growing cottage industry of virtual-persona studios and “AI talent agencies.” Whether you love it or hate it, the capability is here, and it no longer belongs exclusively to the top of the market.

Capability, however, isn’t the same as connection. Early data suggest human creators still outperform on the metric that actually matters: attention that converts. Sponsored posts from flesh-and-blood influencers see 2.7x the engagement of AI personas, and they command far higher fees as a result. Why? Because lived experience translates. AI can simulate a persona; it can’t be a tired parent, have a skin condition, or taste the dish it’s recommending. That gap shows up in the numbers, and in the gut.

Here’s the awkward part. If your business model is “replace people with AI,” I wish you well, from a distance. (That distance gets infinitely larger when it comes to the creeps, weirdos and ghouls building teenage-looking AI girlfriends or boyfriends.) The creator economy didn’t take off because humans were an inefficiency; it took off because audiences were tired of brandspeak and craved human judgment, taste, and accountability. The moment platforms and brands replace the human connection with fully synthetic engagement at scale, you invite perverse outcomes: influencer fraud (humans secretly swapping in AI), disclosure messes, and a flood of bizarre, low-quality spam designed to harvest clicks. We’re already seeing that “fast-food content” dynamic appear, and the backlash won’t be subtle.

Contrast that with tools designed to amplify creators and operators. Translate a video into five languages while the creator sleeps? Great. Auto-answer FAQs so a solo shop can keep pace with DMs? Even better. Create safe, transparent workflows for brands to produce explainers, training, and product docs in minutes instead of weeks? That’s the point. Companies like Synthesia position themselves as infrastructure for this: give every business, large or tiny, a platform to make more engaging, informative content for knowledge sharing and entertainment, not to push humans off the stage. That’s the path that compounds trust.

So don’t confuse a production breakthrough with a creativity substitute. The winners will be the humans who use AI as leverage, who learn prompt craft (I hate calling it “engineering”), iterate faster, and take advantage of the new formats and capabilities AI makes possible. The losers will be the brands that outsource taste to an AI model, replace traditional photoshoots with half an hour’s work in Midjourney, and call it a day.

Platforms should require clear labeling and crack down on deceptive botfluencers. Brands should demand disclosure and favor creators who use AI transparently. And founders should build copilots, not clones. If the first slide in your pitch deck says “fire your creative team and replace them with AI,” enjoy the race to the bottom. If it says “arm every team with an AI studio in their browser,” you’re building for a world where the best ideas, not the biggest budgets or the hottest contacts, finally get their shot.

And now, here’s this week’s news:

❤️Computer loves

Our top news picks for the week - your essential reading from the world of AI

  • MIT Technology Review: Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

  • Fortune: Inside the Anthropic ‘Red Team’ tasked with breaking its AI models—and burnishing the company’s reputation for safety

  • Bloomberg: Study of 67,000 Job Interviews Finds AI Outperforms Human Recruiters

  • Reuters: Chinese firms still want Nvidia chips despite government pressure not to buy, sources say

  • FT: The rise of the AI influencer

  • WSJ: Cutting-Edge AI Was Supposed to Get Cheaper. It’s More Expensive Than Ever.

  • FT: Why AI labs struggle to stop chatbots talking to teenagers about suicide

  • The Verge: Is AI the end of software engineering or the next step in its evolution?

  • AP: The success of AI music creators sparks a debate on the future of the music industry

  • Semafor: How AI will upend the news

  • Wired: Meet the Guys Betting Big on AI Gambling Agents

  • The New York Times: How ‘Clanker’ Became an Anti-A.I. Rallying Cry

  • FT: Computer scientist Geoffrey Hinton: ‘AI will make a few people much richer and most people poorer’
