Navigating AI’s Tensions to Create a Roadmap for AI Literacy
FM (Friday Morning) Reflection #25
In my last Reflection, The AI Skills Gap Is About To Get Real, I wrote about how generative AI tools are fast becoming standard in our digital toolkit—moving from niche products to core offerings in Microsoft 365 and Google Workspace—with huge potential impacts on the workforce. Simply put, generative AI will become integrated into how work gets done—whether you choose to use it or not. I believe that many people will choose to use it, and that non-adopters will find themselves left behind.
I argued that this shift creates an urgent need for AI literacy, and a reader asked good questions: “What would AI literacy look like in practical terms? Would it parallel the concept of internet literacy, or would it require a distinct framework given the unique complexities of AI?”
My sense is that AI literacy will integrate several aspects of media literacy, and in my recent essay on authenticity, I started to unpack the unique challenges inherent to AI.
My experience with AI is that when you reach a new level of achievement, and then look up to get your bearings, you discover that the landscape ahead of you is new and unexplored. You begin to learn and adapt as you cover more territory, and answers start coming to questions you had at the beginning of the journey.
To move the conversation from concept to specifics on what AI literacy might look like, I have started a list of the tensions at play with AI. Over the course of the next few Reflections, I will use that list to inform a framework for making decisions about those tensions…like “how do I know if that snake is venomous or not?” (BTW: Contrary to what I learned as a child, the shape of a snake's head isn't a reliable way to tell...)
Three tensions that first come to mind related to AI are:
Creator vs. Consumer Responsibility
Personal Voice vs. Synthetic Persona
Disclosure vs. Comprehension
Creator vs. Consumer Responsibility
A friend and I recently engaged in a lengthy debate about the use of AI-generated voiceovers, and specifically, the AI version of sportscaster Al Michaels who delivered highlights from the Paris Olympic Games on Peacock.
Creator Perspective: Some would argue that as long as content is labeled as AI-generated, then it's OK to use. In the Al Michaels example, NBCUniversal disclosed that each “Daily Olympic Recap” was powered by AI and that Michaels' voice was synthesized. But questions remain: How often should that label appear? How obvious does it need to be? Do disclaimers truly protect consumers from confusion, and more fundamentally, is that even the responsibility of a creator?
Consumer Perspective: Others believe that media literacy must grow with AI’s prevalence. Similar to reading a nutrition label, we will need to learn how to spot—or at least question—the authenticity of what we see and hear. It’s not enough to rely on a faint watermark or disclaimer at the bottom of a screen. If we know AI is in the mix, we can start asking better questions: “Who created this?” “Why was it created?” “What does the creator have to gain?”
In reality, both sides will need to step up. Producers will need guidance on when clear disclosures are necessary and how to navigate the ethical issues; audiences will need to become savvy consumers, capable of spotting signals that might indicate content is AI-generated.
Personal Voice vs. Synthetic Persona
This tension goes straight to the heart of identity and authenticity. Let's take my own work here at Do Good by Doing Better. Should I create an AI-powered voice replica of myself to narrate each article? That could streamline production, cut down on background noise interruptions, and save time. On the other hand, would it still feel like me and fully represent my voice—both literally and figuratively?
I wonder whether readers and listeners really value hearing the “real me,” vocal imperfections and all, or whether it's more important to open the content up to a larger audience through audio versions of each FM Reflection.
For some experiences, like an automated weather forecast or a bot that takes your carry-out order from the neighborhood pizza parlor, it might not matter who's speaking. But for personal reflections or creative expressions, we typically expect the person behind the words to show up in full—imperfections and all.
This tension isn’t just about audio. It extends to written pieces, social media posts, even entire personas built online. If a brand or personality automates its feed with AI without oversight, is it still them?
Disclosure vs. Comprehension
Finally, there’s a gap between stating something (disclosing AI involvement) and whether the audience truly understands it. A good analogy is a nutrition label, which is technically on every food product—but how many people actually read it, let alone interpret it correctly?
In the Al Michaels AI example, yes, there were disclaimers. But was the average viewer aware—really aware—that the voice was synthetic and not Al Michaels in a studio somewhere?
When disclaimers fail, or when they're just enough to cover legal obligations, consumers may not realize they're interacting with an algorithmic creation. Over time, the risk is that this erodes trust in the media. Even genuine content might be called into question.
This tension underscores the importance of education. If people don't know what “AI-generated” implies—how easily AI can replicate a voice or mimic a writing style—an AI-disclosure tag might not have much meaning. True comprehension demands baseline knowledge about what AI can and cannot do, where it excels, and where it still fails.
Where This Leads Us
These tensions are just a snapshot of the friction points we face in the AI era. By identifying them, we can start to develop practical strategies:
For Creators: Establish clear policies on disclosure. Decide how you’ll use AI in ways that preserve authenticity where it matters most.
For Consumers: Cultivate a mindset of curiosity—ask who, what, when, and why. Learn enough about AI’s capabilities to spot red flags or missing context.
For Everyone: Recognize that ethics and comprehension go hand in hand. A well-intentioned label or policy means little if we don't pay attention or understand it.
Over the coming weeks, I’ll explore more of these tension points and propose ways we might resolve or navigate them. My ultimate goal is to shape a practical AI Literacy Framework—one that helps us handle the ethical, creative, and social dimensions of living and working alongside AI every day.
What are your thoughts and where do you see tensions? Please put them in the comments so that we can discuss!
And as always, have a great weekend!
Photo Bonus
A conceptual framework and the supports of a building both provide strength and resilience. A well-defined conceptual framework reinforces an idea, while robust building supports provide a physical structure and protect what’s inside. The framework of this building – Cherry Street Pier on the Delaware River waterfront in Philadelphia – is evident and elegant in this backlit scene.