AI use is skyrocketing, and new tools are entering the scene every day. No wonder many leaders are concerned about the ethical implications of AI in marketing.
To drill into that topic, Goldcast and the AI Marketing Alliance recently hosted a focused summit on AI ethics in marketing, featuring renowned experts, such as WPP Chief AI Officer Daniel Hulme, ethicist and CEO Olivia Gambelin, and AI policy adviser Kerry Sheehan.
This article covers key topics presented at the summit and some important ways that marketers can responsibly implement AI.
Three Foundational Pillars of Ethical AI
Let's kick things off by talking about the core pillars you can use when implementing ethical AI practices in your marketing organization, as explained by George Samaras, director of marketing ops and technology at Coveo:
- Governance. Be sure that you're not sharing your private company data with freemium AI models that are publicly available to all. Work with your legal team to understand what you can and can't use AI to do right now, and let them carefully review the use cases you're thinking about before you continue.
- Human oversight. Don't let AI run wild without your supervision! Some things, such as marketing email campaigns, can't exactly be recalled if AI goes off course. Be careful, and always have a set of human eyes on everything you publish or send out.
- Auditing and reporting. Monitor your AI tools and their outputs to be sure they're doing what you want them to do. Share your findings with others internally and review them for aberrations or AI hallucinations; those may be a sign that your prompts need to be refined.
Understand the Global AI Regulatory Landscape
Kerry Sheehan, an award-winning machine-learning developer, advises governments and businesses on AI. She encourages marketers to get familiar with the global AI regulations that apply to their industry and company.
That means reading up on Europe's General Data Protection Regulation (GDPR) and ensuring compliance if your business operates in the EU or handles EU residents' data. For US-based businesses, there are sector-specific AI regulations, and some industries are more tightly controlled than others.
There are also longstanding FTC guidelines on algorithmic transparency; the Equal Credit Opportunity Act, for example, requires transparency around automated credit decisions.
You must be aware of the specific regulations that apply to your business so that you can follow them as you adopt and apply AI.
Digital Twins Will Soon Be Used Professionally and Personally
A "digital twin" is a virtual representation of a person, process, or system. Creating a digital twin of yourself entails training a large language model on your digital footprint, emails, calendars, Slack forums, feedback, and other information about you.
If you have a digital twin of each team member, you can ask the twin questions, such as, "Will you work well on this project?" or "Would you work well on this team?" The answers will give you a sense of how changes might affect your team—before you actually make those changes.
Daniel Hulme, a global AI expert with 25+ years of experience, shared that digital twins aren't just for the professional world; they'll be used personally, too. Someday, we might all have digital twins on our phones, learning from our data and getting a sense of our hopes and dreams.
Once the twins are savvy enough, we will give them the power to make purchases for us—which means marketers will have to figure out how to market to AI/digital twins, too, not just humans.
Create AI Leadership Roles and Committees
Ashley Cheretes is director of generative AI at Prudential. As an AI leader, Cheretes oversees a responsible AI program that sets ethical frameworks for how teams build AI capabilities and use AI within the company.
At the enterprise level, Cheretes ensures that Prudential's values are the guiding light behind every AI choice. AI is used across departments, and a cross-functional team (including Legal team members) comes together to make decisions around how and when AI will be used.
There are clear, written guidelines around what can't be done with AI—for example, Prudential doesn't use digital twins.
Hiring a dedicated AI leader may not make sense for all organizations. In those cases, consider forming cross-departmental AI committees to help teams align on AI use and the ways AI can best benefit the company.
Develop Responsible AI Strategies With 'The Values Canvas'
AI ethicist Olivia Gambelin shared a tool called The Values Canvas, which is a template you can use to develop responsible AI strategies and document your ethics strategy.
With The Values Canvas, you aren't just looking at AI—you're looking at people, process, and tech. There are three different elements for each category, with each one representing a different resource that's needed to support AI use and development.
You can download this tool to walk through it. Consideration of the full picture, as laid out in The Values Canvas, is what's needed to do AI well. Without it, you're simply guessing and hoping that things work out. And an ethical approach to AI in marketing requires more from us than that.
More Resources on Ethical and Responsible AI in Marketing
AI, Ethics, and the Law: How to Stay Aboveboard
How Brands Can Responsibly Roll Out AI as Government Regulations Tighten
AI-Washing: Are Buyers Being Taken to the Cleaners?
How Marketers Are Getting AI All Wrong (And What to Do About It)