You’re Not Behind on AI—You Just Need a Better Map
It's natural to feel simultaneously confused and afraid of missing out on the AI wave. Here's a perspective that will help you find YOUR WAY to benefit from AI.
Everyone seems to be using AI for something—writing, coding, researching, designing. And if you’re not, it’s hard not to feel like you’ve missed the memo.
There’s pressure in the air: Am I falling behind? Am I doing this right? What even is the “right” way to use these tools?
One moment, AI looks like a breakthrough. The next, it feels like a black box that outputs nonsense with surprising confidence. It’s confusing. And oddly lonely.
👉 If you feel this way, you’re not alone.
There’s no silver bullet here—no perfect prompt or one-size-fits-all use case. But there are small shifts we can make to better understand and benefit from this technology. One of the most powerful is surprisingly simple: the metaphors we use to think about it.
Using Metaphors as a Discovery and Innovation Superpower
Metaphors are literary devices—but they’re also reflections of the mental models we lean on when we’re trying to make sense of something unfamiliar. And right now, large language models (LLMs) are still unfamiliar for most of us (I’d suggest all of us). They’re powerful, yes—but also opaque, not fully understood, and easy to misjudge.
How we think about them shapes how we interact with them. The metaphor we adopt can silently define what we expect AI to do, how we feel when it gets something wrong, and what role we let it play in our work and lives.
So today, I want to offer you a lens—not to tell you how to use AI, but to help you find your own way. This matters because when we choose our metaphors thoughtfully, we open up better, more grounded ways of working with these tools.
Let’s explore five metaphors—and the subtle ways each one can help (or hinder) how we use LLMs in real life.
Five Metaphors That Shape How We Use AI
🚫 The Intelligence Fallacy — AI as a human-like mind
This is the metaphor that gets us into the most trouble. Treating LLMs like sentient thinkers makes their output feel magical—until it doesn’t(1). They’re confident, articulate, and occasionally insightful… but they don’t reason, understand, or know. Expecting intelligence leads to misplaced trust, and that trust breaks fast under pressure.
Use this metaphor, and you're constantly wondering why your “smart assistant” keeps making silly mistakes. LLMs don't work well for most use cases that start from the assumption that they are intelligent agents.
For a more in-depth analysis, check out this paper published by Apple’s ML division on the weaknesses of current LLMs.
✅ The Library — AI as compressed knowledge
With this metaphor, we think of LLMs as glorified librarians with a super-fast index of the internet. They’re excellent at pulling together existing information, summarizing sources, and offering quick overviews. Just don’t ask them to generate truth—treat them like a knowledgeable assistant who occasionally cites fiction as fact(2).
This metaphor sets realistic expectations: fast knowledge, filtered through human judgment—your judgment.
LLMs mostly work when used through this metaphor, especially with retrieval-augmented generation (RAG) and manual checking of references, as in the sketch below.
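To make this concrete, here is a minimal sketch of the Library pattern in Python. The `search_documents` and `call_llm` functions are hypothetical placeholders standing in for whatever document store and model API you use; the point is the shape of the workflow: retrieve sources first, ask the model to answer only from them, and keep the sources so a human can verify every citation.

```python
# Minimal sketch of the "Library" metaphor (a simple RAG-style loop).
# NOTE: search_documents() and call_llm() are hypothetical placeholders,
# not a real library's API; plug in your own retrieval and model client.

def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: return the top_k passages most relevant to the query."""
    raise NotImplementedError("plug in your vector store / search index here")

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model of choice and return its reply."""
    raise NotImplementedError("plug in your model API here")

def answer_from_library(question: str) -> dict:
    sources = search_documents(question)
    numbered = "\n".join(f"[{i + 1}] {passage}" for i, passage in enumerate(sources))
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite the source number for every claim. "
        "If the sources do not cover the question, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )
    draft = call_llm(prompt)
    # The librarian hands you the books; you still check the citations.
    # Returning the sources alongside the draft keeps that step easy.
    return {"draft_answer": draft, "sources_to_verify": sources}
```

The key design choice is that the model never answers from memory alone: it is asked to stay inside the retrieved sources, and those sources travel with the answer so the manual checking of references stays practical.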
✅ The Robot — AI as a process automator
Seen as a robot, LLMs shine in structured, repeatable tasks: formatting, rewriting, extracting, listing.
They’re brilliant at following instructions, not making decisions. The magic isn’t in intelligence—it’s in speed and scale. You define the process, they carry it out.
Think mechanical efficiency, not creative spark. This perspective generally works, but the value is mostly “cost reduction”.
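Here is a minimal sketch of the Robot metaphor, again in Python with a hypothetical `call_llm` placeholder. You define the procedure and the exact output structure up front; the model only fills in that structure for each item, and anything it gets wrong stays visible instead of being silently accepted.

```python
import json

# Minimal sketch of the "Robot" metaphor: a fixed, repeatable procedure.
# call_llm() is a hypothetical placeholder for whatever model API you use.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model of choice and return its reply."""
    raise NotImplementedError("plug in your model API here")

# You define the process and the output shape; the model just executes it.
EXTRACTION_TEMPLATE = (
    "Extract these fields from the support ticket below and reply with JSON only:\n"
    '{{"product": "...", "severity": "low|medium|high", "summary": "one sentence"}}\n\n'
    "Ticket:\n{ticket}"
)

def extract_ticket_fields(tickets: list[str]) -> list[dict]:
    results = []
    for ticket in tickets:
        raw = call_llm(EXTRACTION_TEMPLATE.format(ticket=ticket))
        try:
            results.append(json.loads(raw))
        except json.JSONDecodeError:
            # Robots drift too: surface the failure instead of guessing.
            results.append({"error": "could not parse model output", "raw": raw})
    return results
```

Speed and scale come from running the same template over hundreds of tickets, not from the model deciding anything on its own: that is the “cost reduction” this metaphor points to.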
💡 The Brain Boost — AI as a thought partner
This metaphor reframes LLMs as cognitive collaborators.
You bring the judgment, they bring the patterns. Ask them to challenge your ideas, explore alternatives, or help you think aloud. This works best when you're not outsourcing thinking—but complementing and enriching it.
It’s less “do this for me,” more “think with me.”
This metaphor mostly works because it builds on The Library pattern above, and its limitations are reasonably easy to overcome.
🧠 The Illustrator — AI as a concept visualizer
At their best, LLMs help make the abstract concrete.
They turn fuzzy thoughts into analogies, diagrams, examples—things you can see and share. Whether you're teaching, planning, or just clarifying your own thinking, this metaphor reveals a hidden superpower: translation.
Use it to make ideas clearer—not just to yourself, but for others.
Using this metaphor helps us fine-tune and improve our thinking by test-running our ideas through a visualization filter.
Conclusion: Metaphors Are Maps—Choose Yours Wisely
When we use metaphors, we are not changing the technology or the tools. But when we change how we see the technology and tools, everything changes.
Metaphors aren’t just ways of understanding AI—they’re perspectives that shape how we think about it, how we use it, what we expect from it, and what role we allow it to play in our lives. A mismatched metaphor can leave us frustrated or misled. A well-chosen one can unlock surprising clarity, creativity, and momentum.
There’s no single “correct” way to frame AI. That’s the point. Different situations call for different lenses. Sometimes you need a librarian. Sometimes a robot. Sometimes a second brain. The most empowered users aren’t just good at prompting—they’re good at perspective-shifting, using metaphors to their own benefit.
So if you’ve felt behind, overwhelmed, or unsure where you fit in the AI wave—you’re not broken. You’re just early. Start with one clear metaphor that matches your needs. Use it. Learn from it. Then try another.
You don’t need to master everything. You just need a mental map that helps you move forward. And metaphors are excellent mapmakers.
⭐️⭐️⭐️⭐️⭐️ This post is sponsored by The Scrum Master Toolbox Podcast, where you can find the most advanced ideas on Software, Product Development and Agile! Subscribe in your app of choice, and get inspired EVERY DAY.
References
(1) Check out a previous article where I listed some catastrophic uses of AI to “replace” other human collaborators:
(2) “Chicago newspaper prints a summer reading list. The problem? The books don't exist” https://www.cbc.ca/news/world/chicago-sun-times-ai-book-list-1.7539016