Sam Altman Got Exceptionally Testy Over Claude Super Bowl Ads
TechCrunch
Anthropic's Super Bowl commercials, which humorously satirized OpenAI's upcoming ChatGPT ad integration, reportedly caused Sam Altman to become "exceptionally testy."
Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.
The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! The advice then twists into an ad for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another spot features a slight young man looking for advice on building a six-pack. After he offers his height, age, and weight, the bot serves him an ad for height-boosting insoles.
The Anthropic commercials are cleverly aimed at OpenAI’s users, following that company’s recent announcement that ads will be coming to ChatGPT’s free tier. They caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers,” and “dunks on” OpenAI.
They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”
First, the good part of the Anthropic ads: they are funny, and I laughed. But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…
In that post, Altman explains that an ad-supported tier is intended to shoulder the burden of offering free ChatGPT to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.
But the OpenAI CEO insisted the ads were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”
Indeed, OpenAI has promised that ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific — which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”
Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”
But Claude has a free chat tier, too, with subscriptions at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the subscription tiers are fairly equivalent.
Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks usage of Claude Code by “companies they don’t like,” such as OpenAI, and said Anthropic tells people what they can and can’t use AI for.
True, Anthropic’s whole marketing deal since day one has been “responsible AI.” The company was founded by OpenAI alumni, after all, who said they grew alarmed about AI safety while they worked there.
Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, too, has determined some content should be blocked, particularly in regards to mental health.
Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme level when he accused Anthropic of being “authoritarian.”
“One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.
Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical environment in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.