Maybe one day I'll regret saying this as I'm begging John Connor to let me in one of his bunkers to escape the machines, but here goes: Marketers, there is no real AI, let alone an inherent threat from it.
And so-called AI doesn't threaten anything. The people developing and using it do.
AI Misconceptions in Marketing
Although plenty of tech critics, academics, and scientists outside the marketing realm seem clear-eyed about so-called AI, what I'm seeing and hearing from most of the marketing world feels very one-sided, and not very critical.
On the one hand, marketers are jumping on the bandwagon, just like we do for any other fad, rushing to claim the right to be among the first to master the tech before grappling with whether it actually adds value (or is ethical). On the other, we're freaked out about the threat of its taking our jobs or transforming our marketing departments outside of our control.
It's frustrating to watch: While unions strike and artists file lawsuits over this issue, marketers wring their hands about so-called AI threatening marketing jobs and making our marketing departments obsolete, while using AI to do marketing.
I don't quite understand that dichotomy, or why marketers seem hell-bent on contributing to a constructed problem that makes our professional lives worse, fulfilling a prophecy of our own making.
And although that's a sentiment Sarah Connor could get behind, the tech isn't exactly a Cyberdyne-level breakthrough.
The Reality of Generative AI
Focusing on what most of the marketing world is using—so-called generative AI—the tech is, as leading scholar Emily Bender puts it, nothing more than algorithms that are "haphazardly stitching together sequences of linguistic forms…according to probabilistic information."
They don't learn, nor do they reference any meaning at all. They're expensive, energy-hungry autocompletes—literally, in the case of large language models (LLMs), and figuratively, in the cases of image and video generators.
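To make the "autocomplete" point concrete, here's a deliberately tiny sketch of what next-word prediction amounts to. This is my own toy illustration, not code from Bender's work or any real LLM: it just samples from a table of word-to-word probabilities, with no meaning anywhere in the loop.

```python
import random

# Toy next-word table (a real LLM derives probabilities like these
# from vast amounts of scraped text). No understanding lives here,
# only "probabilistic information" about which word tends to follow which.
next_word_probs = {
    "the": {"brand": 0.5, "customer": 0.3, "funnel": 0.2},
    "brand": {"voice": 0.6, "story": 0.4},
}

def autocomplete(start, steps, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if not probs:  # nothing in the table for this word; stop
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(autocomplete("the", 2))
```

Scale that table up by a few hundred billion parameters and you have the gist of a "stochastic parrot": fluent-sounding sequences, zero comprehension.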
Dr. Bender co-authored a seminal paper called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" which seems to be required reading in many communities seeking meaningful dialogue about LLMs and AI. But I haven't heard a peep about it in marketing circles.
I have a hunch that's in part because of the term "stochastic parrots." Although it's a near-perfect term for LLMs, a marketer trumpeting expertise in a tool whose name suggests backward-looking repetition of old material wouldn't exactly stand out on LinkedIn.
AI's Impact on Content, Creativity, Marketers, and Marketing
It'd be one thing if our irrationality resulted in something that transformed the value we deliver to our brands and clients. But, so far, the output is filled with errors and biases, and it's straight-up crap content that's probably, in large part, stolen.
I'm not sure what generative content you're consuming, but from what I see, no one with a modicum of taste wants any part of it (including, it should be more-than-noted, our customers).
But, with our biases firmly implanted, too few among us want to admit that what we're bragging about mastering is little more than a bullshit generator. Or as someone in my Mastodon feed called it, "mansplaining as a service: superficially plausible but totally fabricated nonsense presented with unflagging confidence."
I wish I had come up with that. Maybe I'll just start saying I did—you know, as is my prerogative as a man. If I can't get away with that, I'll just say I came up with it after asking an LLM, which is, apparently, today's acceptable off-ramp from plagiarism.
Anyway, if you or your teams are using these digital mansplainers not as actual content creators but as brainstorm partners, content kick-starters, or pitch/demo creators, all to save the time it takes to work through those processes the old-fashioned way, then it seems to me you're abandoning the value that separates your work from the commoditized.
Those processes are what give content meaning, fill in gaps, and ignite connections. Abandon that, and you're abandoning the creative process itself. Creativity is process, and it beats bullshitting any day.
Marketers Are Being Used
AI companies are using us and these well-honed processes in a sucker's game: By operating these algorithms, you give them more information to increase the probability that they sound and look more correct. That, of course, results in more reasons to continue to fear the worst about our future as marketers. As a bonus to this inside-out doom loop, you're facilitating the stealing of your brands' and your clients' hard-earned copyrighted content.
It was bad enough when we were providing all that to the AI companies for free. Now we're actually paying them for the privilege. Make it make sense.
In what should be the clearest manifestation of this doom loop, take marketers' incessant practice of contributing—for free—to LinkedIn's collaborative articles. Why are you doing it? I'll tell you: so LinkedIn can build its algorithms to write articles for you. You're helping LinkedIn make you obsolete.
Making Our Work Harder and Undermining Marketing Integrity
It's also glaring when marketers sing the age-old tune about how difficult it is to "cut through the noise" while using these algorithms to help make exponentially more noise. That's one of the more urgent calls to action about so-called generative AI: More content, less time. "Don't fall behind! Your competition is going to be more efficient than you."
No one seems terribly concerned about the tradeoff, though: In this rush to use a tool that spits out more meaningless content than has ever been possible, we're making it harder for search engines to find our and our clients' brands.
And in the process we're making the world in general a little shittier, too, as people everywhere are more challenged than ever to find accurate information online.
Is this why you got into marketing? To be a spammer? To leave the digital landscape worse than you found it? I shouldn't have to point this out, but we should try to elevate our craft above the stereotypes of clickbait and spam-slinging—if we want to remain relevant. And fulfilled.
Please hear me when I say that what you do as a profession is so much more worthy than what the companies who hype this technology want you to believe. By doing what they tell you, you're commoditizing your work and doing theirs for them.
Ethical Concerns About Using AI in Marketing
I also need help unraveling what I see as ethical hypocrisy in our use of these tools as marketers.
AI and Inclusion: Bias in Algorithms
Here's the one that screams loudest to me: If you care about inclusion and equity, as so many of us proclaim for our brands and clients with our DEI pledges, MLK quotes, and links to inclusive marketing courses we've completed, then understand that publishing content from these algorithms perpetuates real, tangible harm by way of racial, gender, and other biases.
So serious is this issue that it motivates NYU professor and author Meredith Broussard to call algorithmic bias "the civil rights issue of our time."
I encourage you to dig into this topic if you haven't already and ask yourself what your ethical duty is when you knowingly publish content of this ilk or even give the tools more juice by using them at all.
Here are a few more names to follow if you're interested in this issue: Abeba Birhane, Rediet Abebe, Rumman Chowdhury, Safiya Noble, Timnit Gebru, Karen Hao, Mehtab Khan, Joy Buolamwini, and Seeta Peña Gangadharan.
Read about them, and you'll get an undeniable sense of déjà vu: The people trying hardest to sound the alarm about the ethical issues of so-called AI are Black women and other women of color, and—surprise—they're being ignored.
Sustainability and AI: Environmental Considerations
Speaking of alarms: The snooze button on the climate change doomsday clock that we keep hitting? AI isn't exactly Greta Thunberg on that front.
One of the more popular tools, ChatGPT, currently consumes the same amount of energy as 33,000 homes, and even those profiting from this enormous cost to our livable world are concerned about the energy crisis they're driving us headlong into.
I have no doubt that many a marketer has asked these algorithms to generate, or help generate, ESG or other green content for their brands and clients, burning four to five times more carbon per query than a search would, using a model whose training emitted as much carbon as five cars do over their lifetimes (manufacturing included).
Likewise, I'm sure some have asked an image generator—which research has shown to be more energy-hungry than LLMs—to create a background image for an Earth Day ad or social media post, burning as much carbon as charging a mobile phone in the process.
Some of us are just doing this for fun: Hey look! It's Olivia Rodrigo with wrinkles! It's Elderly Mutant Ninja Turtles!
From my perspective, so-called generative AI is just a set of energy-hungry algorithms in a competitive industry with a really effective hype machine to drive investor interest. The companies behind them succeed through threats and bold claims about the tech's power and ability, counting on the oldest of human biases (that language equals intelligence) and the oldest of marketers' fears (falling behind the curve).
In the end, it seems we marketers have been the victims of effective branding. To marketers, AI should stand for "Actual Irony."
Choosing Integrity Over Trends, Rejecting Flawed AI Tools, Taking Practical Steps
So, what to do? First, recognize that what you're worried about is not inevitable. You don't have to surrender your craft or the joy you find in it to Silicon Valley. Nor do you have to surrender your ethics. You have a choice.
Once that's behind you, you can choose to not use so-called generative AI. It's OK. You really can opt out and still stay competitive and innovative. Let your competition race to the bottom. You keep increasing your value.
Next, you can tap into that rebel spirit that's somewhere in all marketers and actively resist the hype. Talk it back down to reality when you can. Wrestle the narrative away from tech billionaires and VC fund managers. Point people to the scholars and activists who use peer-reviewed work to articulate the actual, here-and-now abilities and dangers of the tech.
Then, follow, support, and amplify the work of alternative AI groups like the Distributed AI Research Institute (DAIR) and Black in AI.
There are ethical paths to AI that brilliant people are dedicating their professional lives to, and the loudest voices don't have to be the only ones in this process.
And last, demand that AI policy include some teeth and not just PR platitudes. (Marketers, of all people, certainly know how to spot those.) Specifically, we should demand that AI companies be transparent about their training data sets.
If we know where the people developing these tools are choosing to scrape the training data from, we can hold them accountable to applicable laws and allow objective parties the pathways they need to evaluate and fix the systemic biases in the algorithms.
How? For one, by supporting the AI Foundation Model Transparency Act. We can communicate with our representatives about it, and closer to home we can also request support for this issue and other transparency actions from our professional associations.
Why hasn't the Association of National Advertisers, or the Public Relations Society of America, or the American Marketing Association taken a stand on this issue?
By demanding associations take up the larger issues instead of just providing tactical how-to seminars, we can ask that they actually represent us, not algorithms and the corporate interests behind them.
If politics and community agitation aren't your jam, can we at least think and engage in dialogue critically with each other? Make some room for contrary information?
Maybe the start we need is to just help each other not be suckers to bad marketing.
More Resources on AI's Impact on Marketing
Generative AI: What Keeps Me Up at Night
Marketers' Biggest Concerns About AI
ChatGPT Has Turned 1: What Have We Learned From AI's Breakout Year?