Traditional moderation systems struggle to cope with AI-generated content; Anthropic and Inflection release GPT-4 class models; why AI benchmarks tell us so little; NIST plagued by workplace issues
Adobe finds AI hype is a two-edged sword; should AI be open source?; can AI therapists be as good as the real thing?; Meta is building a large AI model for its video ecosystem
This week, four stories from 404 Media, CNBC, the BBC and NBC demonstrate how AI-generated content is challenging the traditional approach to content moderation that technology companies have used for the last decade to manage the influx of harmful content. As AI becomes increasingly adept at creating text, images, and videos that are nearly indistinguishable from those created by humans, the task of identifying and addressing scams, non-consensual image abuse, financially motivated spam, and misinformation is proving more complex than ever before.
Traditional content moderation approaches primarily focus on detecting and removing harmful content at the point of distribution, when a user uploads a video or publishes a post on an online platform. This typically involves using a combination of automated tools and human moderators to review content based on predefined policies, guidelines or community standards.
However, when it comes to AI-generated media, relying solely on point-of-distribution moderation has its limitations. Since AI-generated content can be produced at scale and with minimal human involvement, it becomes increasingly challenging to mitigate harmful content before it is disseminated to a wide audience. Additionally, this approach means there will always be a gap between content being uploaded and it being removed, during which real-world harms can occur at scale. In the recent example of the Taylor Swift deepfakes, it took X several days to come up with a brute-force solution that stopped the sexually explicit images from continuing to go viral.
To address the challenges posed by AI-generated media, content moderation strategies must evolve to encompass the point of creation as well. This is a new approach because historically we would never have put the responsibility on a service such as Google Docs or Adobe Photoshop to prevent someone from writing a terrorist manifesto or creating climate hoaxes.
But if we are going to be serious about addressing the immediate risks of generative AI, we need to consider point-of-creation moderation measures to effectively prevent harmful content from being generated in the first place.
This may include deploying generative AI-powered detection systems capable of identifying harmful media as it’s being created (by scanning image generation prompts, for example), establishing partnerships with technology companies to implement watermarking technologies that validate authentic content, and implementing content verification mechanisms at various points in the AI creation and distribution supply chain.
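To make the first of those measures concrete, here is a minimal sketch of what point-of-creation prompt scanning could look like: a check that screens an image-generation prompt before it ever reaches the model. The category names, patterns and function names are illustrative assumptions, not any vendor's actual API, and a production system would use a trained classifier rather than regular expressions.

```python
# Minimal sketch of point-of-creation moderation: screen a generation
# prompt before passing it to an image model. Categories and patterns
# here are hypothetical examples, not a real provider's policy.
import re

BLOCKED_PATTERNS = {
    "non_consensual_imagery": [r"\bundress(ed|ing)?\b", r"\bdeepfake\b"],
    "scam": [r"\bgiveaway\b.*\bcrypto\b", r"\bsend\b.*\bbitcoin\b"],
}

def moderate_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a generation prompt."""
    text = prompt.lower()
    hits = [
        category
        for category, patterns in BLOCKED_PATTERNS.items()
        if any(re.search(p, text) for p in patterns)
    ]
    return (not hits, hits)

allowed, reasons = moderate_prompt("undress a photo of a celebrity")
print(allowed, reasons)  # False ['non_consensual_imagery']
```

The key design point is that the gate sits upstream of generation: a blocked prompt never produces an image, so there is no window between creation and takedown during which harm can spread.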
And now, here is this week’s news:
❤️Computer loves
Our top news picks for the week - your essential reading from the world of AI
Large language models can do jaw-dropping things. But nobody knows exactly why. [MIT Technology Review]
Inside the World of TikTok Spammers and the AI Tools That Enable Them [404 Media]
Why most AI benchmarks tell us so little [TechCrunch]
Ads on Instagram and Facebook for a deepfake app undressed a picture of 16-year-old Jenna Ortega [NBC]
Trump supporters target black voters with faked AI images [BBC]
‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing? [The Guardian]
The Mind-Blowing Experience of a Chatbot That Answers Instantly [Wired]
AI Unicorn Anthropic Releases Claude 3, A Model It Claims Can Beat OpenAI’s Best [Forbes]
Should AI Be Open-Source? Behind the Tweetstorm Over Its Dangers [WSJ]
This agency is tasked with keeping AI safe. Its office is crumbling. [Washington Post]
The Lifeblood of the AI Boom [The Atlantic]
⚙️Computer does
AI in the wild: how artificial intelligence is used across industry, from the internet, social media, and retail to transportation, healthcare, banking, and more
Winemakers embrace AI and IoT tools to improve their vineyards and produce better wine [Business Insider]
MoD issues AI technology to help Colchester soldiers shoot down drones [BBC]
IBM says use of Adobe AI tools in marketing boosted productivity [Reuters]
I used generative AI to turn my story into a comic—and you can too [MIT Technology Review]
I tried Tripadvisor's free AI tool to plan trips to Maui and Montreal. It was useful in some ways, but I won't stop doing my own trip research. [Business Insider]
Kayak’s new AI features will let users double-check flights with a screenshot [The Verge]
Wall Street is always talking about AI. Here's how they are actually using it. [Business Insider]
AI designs bespoke 3D-printed prosthetic eyes [New Scientist]
How Wall Street's biggest banks are actually looking at using AI, according to patent filings [Business Insider]
OpenAI's Sam Altman says AI is a tool, not a 'creature' [Business Insider]
ChatGPT helped me renovate my kitchen. Here's how it saves me time on everyday tasks outside of work. [Business Insider]
JPMorgan’s AI-Aided Cashflow Model Can Cut Manual Work by 90% [Bloomberg]
Copilot for OneDrive will fetch your files and summarize them [The Verge]
Machine learning can be the difference between a charming picture and a masterpiece worth millions [FT]
Wix’s new AI chatbot builds websites in seconds based on prompts [The Verge]
Palantir Signs Deal With Ukraine to Use AI to Help Clear Mines [Bloomberg]
🧑‍🎓Computer learns
Interesting trends and developments from various AI fields, companies and people
Looks like we may now know which OpenAI execs flagged concerns about Sam Altman before his ouster [Business Insider]
Brevian is a no-code enterprise platform for building AI agents [TechCrunch]
Meta is building a giant AI model to power its ‘entire video ecosystem,’ exec says [CNBC]
NFT platform Zora is offering a novel way for AI model makers to earn money [TechCrunch]
This Tech Evangelist Has Big Dreams for AI Tutors. Are They Too Big? [WSJ]
AI could be critical to feeding a growing global population—and Big Food is taking notice [Fortune]
Hugging Face is launching an open source robotics project led by former Tesla scientist [VentureBeat]
Cognizant launches state-of-the-art San Francisco lab to boost enterprise AI adoption [VentureBeat]
Google's newest office has AI designers toiling in a Wi-Fi desert [Reuters]
Inflection AI's chatbot Pi surpasses 1 million daily active users [Reuters]
Meet Neema Raphael, the data whiz key to Goldman's AI ambitions who's overseeing the bank's army of engineers and scientists [Business Insider]
NY hospital exec: Multimodal LLM assistants will create a “paradigm shift” in patient care [VentureBeat]
India announces $1.2 bln investment in AI projects [Reuters]
Zapier Central debuts as no-code tool for building enterprise AI bots [VentureBeat]
4 things experts say could happen with AI in 2024 — and why it could be bad news for OpenAI [Business Insider]
InvGate’s AI Hub automatically generates knowledge articles from IT incidents [VentureBeat]
AI recipes are everywhere — but can you trust them? [Washington Post]
The breathtaking scope of Sam Altman's future AI empire [Business Insider]
Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems [Time]
OpenAI’s legal battles are not putting off customers—yet [The Economist]
Amazon’s new Rufus chatbot isn’t bad — but it isn’t great, either [TechCrunch]
The surprising promise and profound perils of AIs that fake empathy [New Scientist]
AWS launches Generative AI Competency to grade AI offerings [ZDNet]
AI is the talk of the town, but businesses are still not ready for it, survey shows [CNBC]
Palantir Adds General Mills, CBS and Aramark as New AI Customers [Bloomberg]
Citi exec: Generative AI is transformative in banking, but risky for customer support [VentureBeat]
How LinkedIn's free AI course made me a better Python developer [ZDNet]
Salesforce aims to blaze new generative AI trail for developers with Einstein 1 Studio [VentureBeat]
The job applicants shut out by AI: ‘The interviewer sounded like Siri’ [The Guardian]
BofA clinches record number of patents with AI, information security in focus [Reuters]
Inside Mastercard’s multibillion-dollar AI arms race against fraudsters [Fortune]
‘The worst AI-generated artwork we’ve seen’: Queensland Symphony Orchestra’s Facebook ad fail [The Guardian]
AI Talent Is in Demand as Other Tech Job Listings Decline [WSJ]
Adobe CEO on new era of generative AI and tackling misinformation [Washington Post]
Google is starting to squash more spam and AI in search results [The Verge]
US Army tests AI chatbots as battle planners in a war game simulation [New Scientist]
AI Might Not Be the Future of Fast Food Drive-Thru Lanes After All [Gizmodo]
Top AI researchers say OpenAI, Meta and more hinder independent evaluations [Washington Post]
AI jobs charge ahead in the face of public skepticism [Axios]
Nobody knows how AI works [MIT Technology Review]
Indeed shares 4 ways companies can ensure their use of AI for hiring is fair, ethical, and effective [Business Insider]
Snowflake partners with Mistral AI, taking its open LLMs to the data cloud [VentureBeat]
Inside YouTube's new CapCut competitor and how it's trying to use AI to speed up 'tedious' editing tasks for creators [Business Insider]
Microsoft's new Orca-Math AI outperforms models 10x larger [VentureBeat]
CrowdStrike and Dell unleash an AI-powered, unified security vision [VentureBeat]
Russia's Sberbank: AI to make 60% of corporate loan decisions by year-end [Reuters]
Anthropic’s Claude 3 knew when researchers were testing it [VentureBeat]
Silicon Valley moguls are weighing in on Elon Musk's battle with OpenAI, and it's getting cattier than a 'Real Housewives' reunion [Business Insider]
Gen Z employees say ChatGPT is giving better career advice than their bosses [CNBC]
OpenAI adds ‘Read Aloud’ voiceover to ChatGPT, allowing it to speak its outputs [VentureBeat]
Why you shouldn’t rely on ChatGPT for exercise suggestions just yet [Washington Post]
OpenAI, Salesforce Sign Pledge to Build AI for Good of Humanity [Bloomberg]
Amazon adds GPT-4-beating Claude 3 to Bedrock [VentureBeat]
Elon Musk welcomes competition from humanoid robot rivals: 'Bring it on' [Business Insider]
Like 5G, telcos must seek commercial use cases to move GenAI forward [ZDNet]
Groq launches developer playground GroqCloud with newly acquired Definitive Intelligence [VentureBeat]
Colleges are touting AI degree programs. Here’s how to decide if it’s worth the cost [CNBC]
Humanoid Robots at Amazon Provide Glimpse of an Automated Workplace [Bloomberg]
Sports analytics may be outnumbered when it comes to artificial intelligence [AP]
Playing Infinite Craft Is Like Peering Into an A.I.’s Brain [New York Times]
Nvidia's AI chips boom could help the Biden administration bring semiconductor jobs to the US [Business Insider]
French police test AI-powered security cameras ahead of Olympics [The Telegraph]
Apple is right not to rush headlong into generative AI [The Economist]
Behind big pharma is big intelligence [Fortune]
Meta AI creates ahistorical images, like Google Gemini [Axios]
China offers AI computing ‘vouchers’ to its underpowered start-ups [FT]
Nvidia CEO says AI could pass human tests in five years [Reuters]
Subscribe to Computerspeak by Alexandru Voica to keep reading this post and get 7 days of free access to the full post archives.