AI and Your Brand: A Look at Magic, Slop, Risk, and Protection
How businesses can harness AI’s potential while avoiding pitfalls that damage trust and reputation
When OpenAI released GPT-5 in early August, it promoted the product as its most advanced model yet: its best chat model merged with reasoning, supporting vibe coding, and faster than ever. It rolled out quickly to Microsoft products like Copilot, too, and technical reviews were largely solid if not spectacular.
But the company was in for a surprise. The blowback from users came fast and it was intense. People who loved GPT-4o were furious it was removed from their manual model picker. Many did ad hoc, side-by-side comparisons and believed GPT-5 was worse. Some professional users groused that it broke their workflows, and that the answers they were now getting were more inconsistent than before, when they could pick the model they wanted.
The router itself was quickly fixed, with CEO Sam Altman explaining that the auto-switcher had been broken.
Still, the complaints poured in.
Next, the company allowed choices to be made manually again, and then even brought back the deprecated model, GPT-4o. All in days.
On X, Altman posted:
“We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways.”
OpenAI announced they would make it “feel warmer” and rolled out another update August 15.
It was a fast response to a social media forest fire, and just the latest demonstration of how strongly people can react around AI—for better and for worse.
The upside from AI that I’ve seen is already genuinely transformative, but as OpenAI learned, when things go wrong (and of course, they will), the response from customers can also be stronger than you might expect.
In today’s piece, I look at some other AI backlash examples, and, inspired by Julian De Freitas’s fine article for Harvard Business Review Magazine (Don’t Let an AI Failure Harm Your Brand), share my own take on ways businesses can win big with AI while still protecting their brand.
In short, you play with fire when you overpromise, outsource final judgment, or try to hide where it’s in use.
The Stakes: Brand Reputation and AI Wins
In marketing, AI is no longer stuck in R&D.
Unilever, as one example, leverages Nvidia’s Omniverse platform for digital twins, which has transformed its content creation, slashing costs while doubling production speed.
They’ve amplified their influencer marketing using this tech, and with a new cookie-scented Dove bodywash campaign landed 3.5 billion social impressions and attracted 52% new customers.
What might have been a fringe novelty rollout was instead a startling success for their marketing.
Formosa Covers is a 35-year-old family-run business, and a far smaller retailer than Unilever. But they also took advantage of AI, using the Amazon Ads AI image generator to solve a major pain point: generating lifestyle images featuring their products. Traditionally this had been a painful process for them, if they managed it at all, lacking the time, funds, and personnel to get it done.
But with their AI work, the results are clear: a boost in clicks (+22%) and page views (+20%), and best of all, a 21% increase in orders with a 35% increase in return on advertising spend.
Not too shabby.
For both these companies, these are pragmatic, actionable wins, and they come at a time when consumers are harder on AI than other technologies.
Pew Research Center findings released in April show that AI experts are far more positive about its potential (even on job impacts) than the general public.
In a striking disconnect, just 23% of the public believes AI will have a very or somewhat positive impact on jobs over the next 20 years, versus 73% of AI experts.
And per their surveys, 66% of adults overall are concerned about inaccurate information, data misuse, and impersonation.
According to the Ipsos Consumer Tracker, three in four Americans want humans creating news content, which is not surprising with current hallucination rates. But far more surprising: two of every three want humans creating the marketing content they’re served with.
In other words, AI is already helping marketers in big ways, but customers still don’t like or trust the idea of it. And of course, the way it’s used matters in a big way.
Public Backfires: AI Mistakes in Marketing
I opened with the surprising GPT-5 backlash, but there are plenty of examples of companies faring far worse around AI miscalculations.
Consider last year’s infamous Coca-Cola holiday ad, which attempted to use AI to pay homage to a popular 1995 version.
The company used three different AI studios (and four different GenAI models) in generating the campaign, but it nevertheless became a social media firestorm for the brand.
Coca-Cola later issued a statement explaining the company embraces new technology as well as human storytellers, but it's worth noting that this was not their first AI-generated ad.
They’ve previously collaborated with OpenAI in 2023 and continue to partner with artists using AI.
In this case, the nature of the ad itself may have been much of the problem, with backlash coming from uncanny humans and the topic (the Christmas holidays, connection, family).
Julian De Freitas’s HBR piece profiles a crisis faced by the robotaxi company (and General Motors subsidiary) Cruise, after a collision that resulted in a pedestrian being dragged some 20 feet.
While the contact was unavoidable (a human driver first struck the pedestrian, throwing the person into the robotaxi’s path), the robotaxi’s failure to stop immediately, and the company’s lack of transparency about it, led to the revocation of Cruise’s permit to operate in San Francisco, a criminal investigation, the departure of the CEO and half the workforce, and a loss of 50% of the company’s value.
Within months, a driverless taxi from another company was attacked by a crowd and set on fire, and the National Highway Traffic Safety Administration opened investigations of numerous robotaxi developers (including those from Google and Amazon).
Before 2024 was over, GM ended their involvement in the robotaxi business entirely.
There are numerous such stories, like Air Canada being found liable for the erroneous claims of an AI chatbot and the termination of McDonald’s AI drive-thru tests last June (first announced in 2019 but made infamous on social media), even as other fast food companies are rolling out new tests today.
There is ample evidence that people judge AI mistakes far more harshly than human ones, but the way you promote and share your AI use (not to mention how you use it in the first place) has a lot to do with the severity of the reaction.
How to Avoid AI Marketing Failures While Gaining the Boost
As De Freitas points out in his article:
Given identical failure cases, people blame an AI at fault more than a human. (This has also been found in Chinese research, demonstrating it runs across cultures.) We tend to imagine a best-case human when comparing ourselves to AI.
An AI failure often causes people to lose faith in other AI systems, as demonstrated by researchers. While we understand that every human is unique and that one person’s failure cannot necessarily be generalized, we are far more likely to assume that all AIs repeat mistakes in the same ways, due in part to how little we understand their workings.
The more a company overstates what AI can do, the more blame they receive when it fails.
Likewise, the more you’ve humanized your AI (anthropomorphizing it), the more harshly people can judge its mistakes. This may be surprising, given that we are more tolerant of human mistakes.
Programmed preferences provoke outrage. Group-based preferences, such as Mercedes-Benz’s safety ranking of passengers ahead of pedestrians, or other pre-programmed ranking of values (such as protecting the young before the old) have been consistently found to infuriate tested groups.
In light of such reactions (and AI’s own behavior), I advise the following:
1) Start where AI has a clear, provable lift
AI can be stunningly effective at generating visuals, as mentioned above. It can be a lifesaver for companies needing to vary product visuals, either to demonstrate use or maintain freshness. It excels at backgrounds, context shifts, platform-specific variations. The same holds with short videos and audio.
It’s a powerful tool for summaries, tagging, first-drafts, suggesting changes of tone or approach—but AI-only copy cannot go live on its own.
It must always be human-edited. We find our own AI automation thrives where the error cost is lowest (where double-checking is already in place or not needed); in all other cases it must be reviewed, or the costs can be high.
In one example, we made a human mistake (putting the wrong version of a graphic live), and readers immediately assumed the issue was caused by unchecked AI.
So-called human-review latency, the time from AI draft to ship, can be measured and managed, but the review itself is essential and cannot be compromised.
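As a sketch of what measuring that latency might look like, here is a minimal Python example that computes the hours between an AI draft and its human approval. The field names and timestamps are illustrative assumptions, not a standard:

```python
from datetime import datetime

def review_latency_hours(drafted_at: str, approved_at: str) -> float:
    """Hours between AI draft creation and human approval (ISO 8601 timestamps)."""
    drafted = datetime.fromisoformat(drafted_at)
    approved = datetime.fromisoformat(approved_at)
    return (approved - drafted).total_seconds() / 3600

# Example: an AI draft created at 09:00 and approved at 14:30 the same day
latency = review_latency_hours("2025-08-15T09:00:00", "2025-08-15T14:30:00")
print(f"{latency:.1f} hours from draft to approval")  # 5.5 hours
```

Tracking this number over time shows where review is a bottleneck without tempting anyone to skip the review itself.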
2) AI and consumer trust: Publish your AI content standards and label outputs
The bottom line: AI transparency is critical.
Customers reward clarity and punish deception. Increasingly, platforms are labeling and filtering AI content, even when created off their own platforms.
This same standard can be applied with tags like “includes human-reviewed, AI-generated imagery,” and it benefits you to make your AI content standards available online.
The Coalition for Content Provenance and Authenticity (C2PA) was created by Adobe, Arm, BBC, Intel, Microsoft, and Truepic back in 2021 (its steering committee now includes Amazon, Google, Meta, and OpenAI), and it provides an open technical standard called Content Credentials for responsible AI use in digital imagery.
It has been compared to a nutrition label for your content, and Adobe tools, many digital cameras, and some smartphones support it natively.
Maintaining consistent AI transparency for brands helps avoid the kinds of social media backlashes we’ve seen for companies trying to pass automated work off as human.
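As one illustration of labeling in practice, a simple disclosure record can travel with each asset as structured metadata. This is a hypothetical sidecar format sketched in Python, not the actual Content Credentials specification, which is far richer:

```python
import json

def disclosure_label(asset_id: str, ai_generated: bool, human_reviewed: bool) -> str:
    """Build a small JSON disclosure record to publish alongside a content asset."""
    label = {
        "asset_id": asset_id,
        "ai_generated": ai_generated,
        "human_reviewed": human_reviewed,
        "disclosure": (
            "Includes human-reviewed, AI-generated imagery"
            if ai_generated and human_reviewed
            else "Human-created content" if not ai_generated
            else "AI-generated content (unreviewed)"
        ),
    }
    return json.dumps(label, indent=2)

# Hypothetical asset ID for a campaign image
print(disclosure_label("campaign-image-042", ai_generated=True, human_reviewed=True))
```

Even a lightweight record like this makes the disclosure consistent across channels, rather than ad hoc per post.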
3) Don’t inflate AI expectations
In marketing it’s natural to boast that you’re better than the competition, but when it comes to promoting AI capacities to end customers, bigger isn’t always better, and promises must be backed up.
Avoid claiming AI outcomes are absolute, guaranteed, certain, flawless.
Better to provide independent testing or snapshot examples of real, individual outcomes, with AI’s stochastic nature clearly explained.
AI-washing cases have put scrutiny on some big tech companies for their AI claims, and the burden of proof may end up on you, especially in regulated industries like healthcare.
It’s true that AI can already bring enormous benefits to your bottom line. What’s less true is that it’s guaranteed to always behave in the same way.
4) Putting a human in the loop is more than just AI marketing ethics
The above is exactly why human oversight is necessary and for more than just optics.
Anywhere you can’t afford hallucinations or periodic failures, human approval is critical.
Many of the worst branding woes and much corporate pain have come from failures at this step, and as I have often written, AI will not stand behind its own work and take responsibility.
A human being must do this, and it mitigates more than just marketing risks. The trick is working this into your workflow in a way that can still gain on AI’s ability to accelerate and scale.
5) Be ready to handle AI failures
This one is strange because we are always ready to handle human failures. We assume them, in areas where it matters.
Rarely is a single person the sole check of content, and if that is the case, mistakes are sure to slip through.
The same is true with AI. An onboarding model, like the one we use for new employees, is just as reasonable for new AI systems. Having a way to roll back, pause, or revoke permissions is essential, and it must be easy and well documented.
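A rollback path can be as simple as a kill switch wrapped around the AI step, with the human workflow as the fallback. A minimal Python sketch, where the flag source and both handlers are stand-in assumptions:

```python
def generate_copy(prompt: str, ai_enabled: bool, ai_generate, human_queue) -> str:
    """Route a content request to AI when enabled, otherwise to the human queue.

    In practice `ai_enabled` would come from a centrally controlled feature flag,
    so the AI path can be paused instantly without a code deploy.
    """
    if ai_enabled:
        draft = ai_generate(prompt)
        return f"[PENDING HUMAN REVIEW] {draft}"  # AI drafts still require sign-off
    return human_queue(prompt)

# Stand-in functions for the AI model and the human workflow
fake_ai = lambda p: f"AI draft for: {p}"
fake_human = lambda p: f"Routed to human writers: {p}"

print(generate_copy("holiday ad tagline", True, fake_ai, fake_human))
print(generate_copy("holiday ad tagline", False, fake_ai, fake_human))
```

The point of the design is that flipping one flag changes behavior everywhere at once, which is what “easy and well documented” rollback looks like in code.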
Protecting Your Brand with AI: Quality Staffing
You need AI to keep up in today’s business landscape, and in marketing it can be a complete game changer. But if anything, it increases your need for quality people with certain skills.
People who don’t just reluctantly use AI, but understand it, can innovate with it, and have experience with practical applications, benchmarking new releases, and effective safeguards.
Whatever your AI brand strategy, you need expertise to make it a reality.
I’m both an AI enthusiast (CEO of an AI-first company) and a provider of top tech talent with nearly 30 years of experience. I understand how the two go together, and it’s a mistake to believe you can get excellence from AI without their union.
Conclusion: Innovate with Care
AI in marketing is here to stay. But it’s not suitable for unchecked, final output, and even if it were, what would make your brand stand out in a world where everyone had it?
Still essential are those who understand what to say and how to say it, and who know how to leverage AI to do things that were not previously possible, at far greater scale.
At the same time, it’s critical to understand human sentiment on this technology—sometimes coming from fear, sometimes from a misunderstanding of how it works, and sometimes from run-ins with bad implementations by companies that are rushing to market.
Understanding that it will fail, and knowing how you will handle that before it happens, is critical to getting AI’s benefits without the pain. So is being transparent and outspoken about its use.
Trust me when I say, with these safeguards, it will take us places we can barely imagine.



