You don't need me to tell you how a company's advertisements can come back to haunt it, from obvious examples, like celebrity endorsements from people not yet publicly known to be sex criminals, to the less obvious.
At risk of inventing nonsense words to sound clever à la the good Doc, let's consider the buzzword-bridge that is "AI". Specifically, generative programs with tangible output, and the harm they cause to a company's intangible assets.
It wasn't long ago that Bud Light did a brand partnership that made a lot of people very angry over nothing worth getting angry over, but the mechanism there is my point: consumers, confederate as they are, follow the logic of a brand partnering with a trans woman. If the company is willing to do that, it must be willing to do all the other things these consumers are morally opposed to.
'This company thinks trans people are valid enough to be filmed for profit. That means, on some level, they respect queer people, and I cannot stand for that.' It doesn't matter how minimal the advertisement itself is; what matters to people is that it represents what a company might believe.
But this knife doesn't just cut both ways; it's more of a directionless shiv. Advertisement is association, by which I mean the goal of an ad is to make the viewer see what a brand is and stands for. The closer it aligns to the viewer's own values, the more likely they are to engage, and then decide if they need or want your product.
Association with feelings, with kindness, with holiday cheer. Associations with supporting troops, with donations for the veterans' funerals, with bodies you aspire to have or be.
So if an advertisement is using AI, that means the company is willing to adopt current generative programs in some capacity. It means they're cosigning the job cuts, the undermining of vetted research, the fake athletes and games it invents. And worse yet: because generative AI is both ubiquitous and often just as absurd as normal advertising fare, it doesn't matter whether the company is actually using these tools or not.
All it takes is for a listener to hear something that sounds like a fake voice. To see a visual skewed and warped beyond reasonable recognition. To read a sentence so obviously fake that no human could have let it pass. Yet these could just as easily be an underpaid voice-over artist phoning it in, an overworked artist slapping a project together, or a laid-off writer's last of 25 quota'd pieces that month.
Companies think they can get away with using AI so long as they structure ads so as to make the difference unrecognizable, but in doing so, even the ads that were made entirely by humans are called into doubt. After all, we know you paid for a license to use those tools; this won't be the only time you use them, will it?
Not unlike plagiarism (generative AI is but a form of it), the willingness to engage with these falsified shortcuts shows an unwillingness to ensure the integrity of an output, and therefore of any final product a company produces.
