The AI Hype Survival Guide for Software People: What’s Actually Changing (and What Isn’t)
A practical guide for software folks navigating the AI hype—what’s actually changing at work, what isn’t, and why it matters. With some practical takeaways.
There’s a moment—right before a big shift—when everyone’s talking, but few people really know what’s happening. We’re in that moment with AI.
Your feed is probably overflowing with declarations like:
🚨 AI is replacing your job!
🌟 AI is the greatest productivity boost since the mouse!
👀 AI is overhyped and undercooked!
So… which is it? I’d say that, in all likelihood, the future is already here—it’s just not evenly distributed. Thank you, William Gibson.
What’s actually happening—right now, in real teams and real businesses—is far more nuanced, confusing, and instructive than the headlines suggest. And that’s why I decided to write a flashlight-in-the-fog kind of piece. I’ll look at real-world case studies across industries to answer a deceptively simple question: What jobs is AI really taking over—and how should software leaders respond?
Some of these cases are surprising. Others are… let’s say “educational.” 🫣
You’ll see how AI is transforming marketing teams in China, quietly gutting freelance platforms, assisting developers, and yes—even getting lawyers sanctioned in federal court for citing cases that don’t exist.
Here’s the paradox at the heart of all this: today’s AI tools can sound authoritative while being completely, spectacularly wrong. They don’t know what they don’t know. And if we treat them as replacements instead of tools, we risk outsourcing not just tasks—but judgment.
My promise is to keep it practical. Whether you’re scaling a dev team, rethinking your workflows, or just trying to sleep at night without imagining GPT-6 taking over your standups, you’ll walk away with useful insights.
I’m not here to fearmonger or evangelize. I’m here to study—with curiosity—the terrain, flashlight in one hand, thermos of strong coffee in the other.
Get comfortable, get a cup of hot coffee, and let’s begin.
The AI That Actually Took the Job
While the headlines love to speculate about what might happen, a number of companies have already pulled the trigger on real, large-scale AI replacements. In these cases, AI didn’t assist or augment—it outright took over.
NOTE: some of these cases might be “inflated” to make a good marketing story—I can’t know for sure—but they’re good stories to be aware of, as they inform our decisions.
Take Dukaan, a Bangalore-based e-commerce startup. In 2023, the company replaced 90% of its customer support staff with an in-house AI chatbot. The CEO called it a “tough but necessary” decision, citing an 85% reduction in support costs and drastically improved response times. This wasn’t a pilot or partial rollout—it was a clean sweep. Read the case.
Meanwhile in Sweden, fintech giant Klarna deployed AI agents (built on OpenAI's models) to handle two-thirds of all customer inquiries. In just one month, the AI resolved issues in under two minutes—down from a previous average of eleven. The workload it absorbed was equivalent to about 700 full-time agents. Klarna didn’t technically lay anyone off—many support roles were outsourced—but the company now simply doesn’t need to hire them. Read the case.
The trend isn’t limited to customer service. Chinese advertising firm BlueFocus took an even bolder leap, publicly announcing it had ended all contracts with its copywriters and graphic designers, replacing them entirely with generative AI tools. The decision followed the company’s adoption of Microsoft Azure’s OpenAI services and Baidu’s ERNIE Bot. In essence, BlueFocus pivoted its creative operations almost overnight. Read the case. (NOTE: Although this story is likely to be inflated for marketing impact, there’s a trend we need to pay attention to.)
Duolingo made similar moves in education. After integrating AI translation models into its platform, the company let go of roughly 10% of its contractor translators. The decision was explicitly attributed to the efficiency gains from machine translation. The fact that these workers were contractors meant the change flew under the radar—but it marked a clear moment of functional replacement. Read the case.
Even the media is not immune. Microsoft’s MSN portal began using AI to generate and rewrite news stories as early as 2020, letting go of dozens of human editors in the process. The AI system selects and edits stories for MSN’s homepage—work previously done by journalists. While the cost savings were immediate, early missteps (like poorly chosen headlines and inappropriate content pairings) revealed the brittleness of relying too heavily on automation in editorial work. Read the case. (NOTE: the MSN case is an example of AI replacing jobs yet performing poorly, with possible liability for the companies using it.)
These are real-life, already-happening cases of AI—or LLMs, more precisely—being used to replace jobs that were previously staffed by workers. And they follow a clear pattern:
If the task is high-volume, repetitive, and doesn't demand perfect accuracy or deep judgment, AI has already proven itself cheaper, faster, and—in some cases—good enough.
There’s a lesson for us in the software industry! What are the repetitive, high-volume, accuracy-light tasks in your organization?
Those are the logical entry points for AI. And ironically, they’re often not seen as strategic enough to attract executive attention. Which means that while these tasks are the most vulnerable to automation, they may also be the least examined.
The practical application of AI/LLM technology is likely to be a quiet shift, and it may already be further along than we think.
⭐️⭐️⭐️⭐️⭐️ This post is sponsored by The Scrum Master Toolbox Podcast, where you can find the most advanced ideas on Software, Product Development and Agile! Subscribe in your app of choice, and get inspired EVERY DAY.

Freelancers, Farewells, and Market Shifts
If you want to see AI’s economic impact in real-time, look no further than the freelance marketplaces.
Platforms like Upwork and Fiverr are built on the long tail of human creativity and labor—people doing writing, translation, customer service, design. The exact kind of work generative AI now performs in seconds. And the market has noticed.
Since ChatGPT’s launch, writing gigs on Upwork have dropped by 33%, and translation gigs are down 19%. Customer support jobs? Also down—by 16%. That’s not a prediction. That’s a shift in demand already in progress. Read the data.
At first glance, these stats might feel like background noise—something for freelancers to worry about, not software leaders. But these platforms are often canaries in the coal mine. They reflect where organizations are already choosing to automate. Not in big strategic moves, but in dozens of quiet micro-decisions:
“Let’s not hire a copywriter this time.”
“Let’s try GPT for this product description.”
“Let’s use DeepL instead of that localization team.”
And it’s not just about job postings vanishing. Fiverr, another major player in the gig economy, is now flooded with sellers offering AI-generated art and content for a fraction of what human creatives used to charge. Some are transparent about it. Others less so. (And yes, some of that art still has the telltale six-fingered hands and melting eyes. Progress is not always elegant.)
Here’s the twist: a lot of clients are still paying $50 or $100 for what is, in essence, a well-crafted AI prompt and a Midjourney render. That’s not sustainable. As AI literacy grows, we’ll likely see these services commoditized—or vanish.
But in the meantime, we’re watching an informal redefinition of “entry-level knowledge work.”
What once served as a launchpad into creative or analytical careers—writing SEO content, translating blog posts, compiling market research—is now being rerouted through algorithms.
An interesting question arises from this data: Where will future specialists come from if the stepping stones are automated out of existence?
And from a leadership perspective, it raises a different question: What low-risk, low-margin tasks in your organization are quietly being reimagined—or ignored—because someone assumes “AI can probably do that now”?
If you're not looking, someone else might be. And they may not be thinking about long-term quality, team development, or ethical tradeoffs. Just the short-term win.
The bottom line:
AI is creeping in not through headline-grabbing revolutions, but through spreadsheet rows, procurement shortcuts, and small tasks no one thinks to defend.
Watch closely. These early displacements are whispers of broader changes coming for how we define work—and how we build teams that last.
When AI Still Fails Spectacularly (And Expensively)
For all the impressive headlines, AI still has a talent for getting things wrong—with confidence.
Let’s start in a Manhattan courtroom. In 2023, two New York lawyers filed a motion that cited six legal cases… that didn’t exist. They had used ChatGPT to draft parts of the brief and trusted the citations it generated without verifying them. The judge was not impressed. Sanctions followed, and the incident became a case study in what happens when automation meets accountability. Read the story (Ad-blockers not welcome on this site).
Not to be outdone, a legal team working with Anthropic later attempted to blame an AI model—Claude—for faulty output in a court filing. The idea that “the AI did it” might sound convenient, but judges (and clients) don’t particularly care. As it turns out, responsibility doesn’t vanish just because your assistant runs on transformers. Read the coverage.
Government systems haven’t fared much better. When New York City launched an AI chatbot called MyCity to help small business owners navigate local regulations, reporters discovered it gave wildly inaccurate—and in some cases, illegal—advice. It told employers they could take a cut of workers’ tips. It misquoted minimum wage laws. It ignored requirements for shift-change notifications. In other words, it was wrong in ways that could get someone sued. Read the article (Ad-blockers not welcome on this site).
And then there’s the media. In 2023, CNET quietly published dozens of financial explainer articles written by an in-house AI tool. The problem? More than half had to be corrected after publication—some due to factual errors, others due to plagiarism concerns. Read the breakdown. The public backlash was swift, and the experiment was paused.
These aren’t edge cases. They’re reminders of a core truth: AI isn’t a neutral engine of truth—it’s a fluent bullshitter with a high tolerance for risk. Or, more accurately, “Risk? What’s risk?”
And here’s the kicker: it doesn’t know when it’s wrong. It doesn’t pause before issuing advice with legal implications. It doesn’t raise a hand and say, “Actually, I made that case up.” It just outputs, with confidence and perfect syntax, whatever seems statistically likely to fit the prompt.
That’s fine for drafting email subject lines. It’s not fine for court briefs, regulatory advice, or financial journalism.
It’s all fun and games—until the AI gets you sued or fired!
The takeaway?
AI may be fast, cheap, and occasionally brilliant—but it is not (yet?) trustworthy where stakes are high and errors have real consequences.
In those spaces, it’s a tool—not a replacement. And tools need careful, accountable supervision.
Lessons for Software Leaders
By now, the patterns are probably starting to feel familiar.
AI is incredibly good at doing just enough to displace work that used to take time, energy, and human effort—especially when that work is repetitive, high-volume, or doesn’t demand deep contextual reasoning. That’s why it’s showing up first in customer support queues, low-stakes content creation, and transactional translation work.
But what does that mean if you’re leading a software organization?
It means this: you’re not next. You’re already in it.
If your developers are reaching for GitHub Copilot or Claude Code before searching Stack Overflow, that’s adoption. If your product team is using LLMs to summarize customer feedback or write release notes, that’s a shift. If someone on your team ran an internal AI pilot—and didn’t mention it to anyone—that’s the future quietly slipping in through the side door.
And like with the freelance platforms, the most affected roles won’t be the ones we spotlight on org charts. It’s not your principal engineer or your lead PM who’s first in line—it’s your QA tester writing the same regression scripts week after week. It’s your support team triaging tickets that could be routed, categorized, and maybe even partially answered with AI. It’s your junior dev writing boilerplate they never wanted to write in the first place.
But—and this matters—the goal isn’t to replace. It’s to rebalance.
Done well, AI can give your team more time for meaningful work:
Devs focus on architecture and testing strategy instead of wrangling import statements and writing essential but uninteresting boilerplate code.
PMs synthesize insights instead of sifting through raw transcripts. Which—hopefully—they will double-check 😉.
Designers iterate faster, instead of worrying about pixel-perfect mock-ups.
But AI will only have an impact if you’re intentional. If you plan for hybrid workflows instead of hoping the tools will “just help.” If you ask hard questions about quality, oversight, and accountability—before your devs are pushing AI-generated code into prod.
And if you’re not asking those questions yet? That’s okay. This is early terrain. We’re all mapping it in real time, and I’m here to help.
So ask yourself:
Where could AI remove friction from your team’s day-to-day?
Where might it quietly lower the bar without anyone noticing?
And who on your team has already started experimenting—without permission, but with results?
This isn’t about jumping on trends. It’s about choosing how your team learns.
Because if you don’t explore now, you’ll be reacting later.
Concrete Steps You Can Take Today to Explore AI in Your Team — Pragmatic Innovation
By now your coffee is either finished or cold! I mean, we’ve covered a lot! From chatbots replacing support roles to developers quietly skipping Stack Overflow in favor of AI copilots. But if there’s one clear takeaway, it’s this:
We’re not in the age of AI mastery. We’re in the age of AI experimentation.
And if you lead a team, your job isn’t to have all the answers. It’s to create the conditions where discovery is possible—before change catches you by surprise.
Here are five concrete steps you can take in the next few months to explore how AI can actually help (not threaten) your team.
1. Audit for Low-Risk, High-Repetition Tasks
✨ Inspired by cases from Dukaan, Klarna, Duolingo
Look at your workflows. Where is your team doing the same thing over and over again?
Think ticket triage, test case creation, data formatting, release notes, translation cleanup, documentation, code reviews, splitting user stories, mapping user stories to personas… The list is long; get together with the team and brainstorm a few more.
Make a list. Pick one—you’ll get to the others later, just pick one. Try automating it.
It doesn’t have to be perfect. You’re not looking for a finished product—you’re starting a learning loop. Be agile about it! 😉
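To make “try automating it” concrete, here’s a minimal sketch of what a first experiment could look like: using an LLM API (OpenAI’s Python SDK here, but any provider works) to triage support tickets into categories. The model name, the category list, and the prompt are all assumptions for illustration; swap in whatever fits your stack.

```python
# Minimal sketch: LLM-assisted ticket triage.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and categories below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["billing", "bug report", "feature request", "how-to question"]

def triage_ticket(ticket_text: str) -> str:
    """Ask the model to assign exactly one category to a support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        temperature=0,        # keep classification output as stable as possible
        messages=[
            {
                "role": "system",
                "content": "You are a support triage assistant. "
                           f"Classify the ticket into one of: {', '.join(CATEGORIES)}. "
                           "Reply with the category name only.",
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip()

# A learning loop, not a product: run it on last week's tickets and
# compare the model's answers with how a human actually routed them.
print(triage_ticket("I was charged twice for my subscription this month."))
```

Notice what this is and isn’t: it doesn’t touch production, it’s trivially reviewable, and the failure mode (a mislabeled ticket in a spreadsheet) is cheap. That’s exactly the low-risk, high-repetition profile this step is about.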
2. Empower Small-Scale, Safe-to-Fail Pilots
✨ Inspired by the freelancers & dev teams already using Copilot, GPT, Midjourney
Chances are someone on your team is already experimenting—maybe without permission, maybe on weekends. Great.
Give them 10% time. Ask them to host a lunch-and-learn session on what worked, what didn’t, and what surprised them. If they can include a demo, give them an extra pat on the back!
Build a “What We’re Learning About AI” doc that grows over time.
Make the invisible learning visible.
3. Create a ‘Red Flag’ List 🚩
✨ Inspired by cases like NYC’s chatbot, hallucinating lawyers, and CNET’s corrections
Not everything should be automated. Start capturing where you don’t want AI touching anything—yet.
Examples:
Legal compliance
Security-sensitive code (security is an area where I believe we will discover some significant problems…)
High-stakes customer messaging
Anything where “sounding confident” isn’t the same as “being right”
A red flag list gives your team psychological safety to explore without guessing at the boundaries.
4. Rethink Entry-Level Work
✨ Inspired by cases like Upwork, Fiverr, BlueFocus
If AI is eating the bottom of the ladder, what does that mean for how you train and grow junior talent?
Ask:
Are we creating pathways for humans to learn higher-value work?
Are we offloading too much too early to AI, and cutting off future team capacity?
Should we define new “AI-collaboration” roles, not just replacements?
Being thoughtful here isn’t just ethical. It’s strategic. Make AI a deliberate part of your strategy, and that means considering it in your upskilling plans!
5. Make AI Learning a Team Sport
✨ Inspired by: You. Reading this.
Don’t treat AI as a leadership-only concern.
Give everyone on your team a role in discovering what’s possible.
Start a shared Slack thread: “Cool things AI helped me with this week”
Rotate AI “exploration buddies” across functions—PMs, devs, designers, support
Debrief once a month: What’s working? What’s risky? What’s exciting?
Involve your team in building the map. They’re closer to the edges than you are.
Bottom line? You don’t need to overhaul your org. You just need to start learning—deliberately, together, and fast. With an Agile spirit! ⭐️
Because the companies that treat this as an experimentation window…
will be the ones best prepared when the terrain settles.
And One More Thing: This Isn’t Just About AI. It’s About Us.
As you begin to experiment, remember: this isn’t really about the tools. It’s about what the tools free us to do.
Every time we automate a repetitive task, we’re opening space—for creativity, for strategy, for connection, for craft. We’re not just streamlining—we’re redefining what good work can feel like.
As we learn, we discover new tasks, new spaces where we can express our creativity, our discipline, our judgment.
New ways to collaborate. To focus. To enjoy our work lives.
We could choose to focus on what’s being taken away. But we can also decide to focus on what’s possible now—after AI is here.
And the best part? We get to figure that out together.
Let’s be like Alice: curious, grounded in our community, and continuously learning! Let’s be a little more like Pragmatic Innovators!