Anthropic’s Super Bowl ad, one of four commercials the AI lab launched Wednesday, begins with the word “BETRAYAL” boldly splashed across the screen. The camera pans to a man earnestly asking a chatbot (apparently meant to depict ChatGPT) for advice on how to talk to his mother.
The bot, played by a blonde woman, offers some classic advice: start by listening. Try a nature walk! And then it segues into an ad for a fictitious (we hope!) cougar dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another features a petite young man looking for advice on how to build a six-pack. After he gives the bot his height, age, and weight, it shows him an ad for height-enhancing pads.
The Anthropic ads are clever jabs at OpenAI, which recently announced that ads will be coming to the free tier of ChatGPT. And they caused an immediate stir, prompting headlines that Anthropic was “mocking”, “dirtying” and “soaking” OpenAI.
They are funny enough that even Sam Altman admitted on X to laughing at them. But he clearly didn’t find them funny: they inspired him to write a novella-length rant that devolved into calling his rival “dishonest” and “authoritarian”.
In the post, Altman explains that the ad-supported tier is meant to cover the cost of offering ChatGPT for free to many of its millions of users. ChatGPT is still the most popular chatbot by a wide margin.
However, OpenAI’s CEO insisted that the ads were “dishonest” in suggesting that ChatGPT would twist a conversation to insert an ad (and possibly an ad for an off-color product). “Obviously, we would never display ads the way Anthropic depicts them,” Altman wrote in a social media post. “We’re not stupid and we know our users would reject it.”
Indeed, OpenAI has promised that ads will be self-contained, labeled, and will never affect chat responses. But the company also said it plans to make ads conversation-specific — which is exactly what Anthropic’s ads allege. As OpenAI explained on its blog: “We plan to test post-reply ads in ChatGPT when there is a relevant sponsored product or service based on your current conversation.”
Altman then lobbed several equally dubious claims at his rival. “Anthropic serves rich people an expensive product,” he wrote. “We also think we need to bring AI to the billions of people who can’t pay for a subscription.”
But Claude, too, has a free tier, with subscription plans at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue that the subscription levels are fairly equivalent.
Altman also claimed in his post that “Anthropic wants to control what people do with AI.” He accused Anthropic of blocking the use of Claude Code by “companies they don’t like,” such as OpenAI, and said that Anthropic tells people what they can and can’t use AI for.
It’s true that Anthropic’s entire marketing pitch from day one has been “responsible AI.” The company was founded by former OpenAI employees who said they became concerned about AI safety while working there.
Still, both chatbot companies have usage policies and AI guardrails, and both talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI has also decided that certain content should be blocked, especially when it comes to mental health.
Still, Altman took the Anthropic-tells-you-what-to-do argument to an extreme when he accused Anthropic of being “authoritarian.”
“One authoritarian society won’t get us there on its own, not to mention the other obvious risks. It’s a dark road,” he wrote.
The use of “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced at best, and especially tactless given the current geopolitical environment, in which protesters around the world have been killed by agents of their own governments. Business rivals have been duking it out in commercials since the beginning of time, but Anthropic has clearly struck a nerve.