
Go from overwhelmed to in-demand AI Strategist (without hours of research)

3 Reasons Why AI Agents Will Struggle to Deliver on Their $70B Promise


Hi there,

In our big picture essay, I discuss the reasons why I'm bearish on how long it will take AI agents to reach mainstream adoption.

We'll no doubt see more agentic (and subagentic) systems over the next few years, but companies trying to adopt agents without addressing the issues below will face real headwinds. Would love to hear your thoughts!

I also highlight several new OpenAI features and releases in our news briefing, including one of Sam Altman's fears about AI. AND Jonathan shares a helpful strategy to get AI to stop agreeing with you all. the. time. Enjoy!

--Neil

BIG PICTURE

3 Reasons Why AI Agents Will Struggle to Deliver on Their $70B Promise

"Cut your workforce in half while doubling productivity."

You've probably heard some version of this claim in exhibit halls and sales demos as vendors race to position their AI agents as the ultimate business solution.

It's why analysts project the AI agent market will 10x over the next five years from $6 billion to $50-70 billion.

But hold on. Before you redirect your entire AI budget toward agents, there's something you should know: these autonomous systems aren't the silver bullet many vendors promise.

So where exactly should leaders place their bets?

Let's cut through the hype and examine three areas where AI agents are making real progress – and three stubborn problems holding them back from delivering on the hype.

AI Agents Will Continue to Gain Market Share Because...

  • Hybrid AI-Human teams are powerful

Fintech provider i2c offers an interesting example of how AI agents crunch “hundreds of behavioral signals in real time” to assess fraud risk. When a clear decision isn’t possible, the system escalates to a human reviewer for judgment.

Here, AI processes 40% of the fraud claims with dynamic scoring that evolves to detect new threats while keeping a human in the loop. This combined human-plus-AI approach is already showing real-world benefits in both processing time and quality.
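The escalate-on-uncertainty pattern described above can be sketched in a few lines. This is a minimal illustration only: the function names, signal format, and thresholds are hypothetical, not i2c's actual system.

```python
# Minimal sketch of the escalate-on-uncertainty pattern: the model scores
# a claim, and ambiguous cases get routed to a human reviewer.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    verdict: str   # "approve", "block", or "human_review"
    score: float   # fraud risk score in [0, 1]


def triage(signals: dict[str, float],
           low: float = 0.2, high: float = 0.8) -> Decision:
    """Score a claim from normalized behavioral signals; escalate ambiguity."""
    # Toy scoring: average the (already normalized) signal values.
    score = sum(signals.values()) / len(signals)
    if score <= low:
        return Decision("approve", score)
    if score >= high:
        return Decision("block", score)
    # Ambiguous middle band: hand off to a human instead of guessing.
    return Decision("human_review", score)
```

The key design choice is the middle band: rather than forcing a call on every claim, the agent handles the clear-cut majority and reserves human judgment for the cases where it adds the most value.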

  • Competitive advantage will drive widespread adoption

While McKinsey predicts that 92% of companies will increase their AI investments over the next three years, the race toward agent adoption specifically is accelerating. A recent survey by Aptitude Research found that nearly half of companies already view agents as critical efficiency drivers in hiring processes alone.

As budgets, adoption, and success stories grow, companies not deploying and experimenting with AI agents will be seen as less competitive and outdated.

  • High ease of use for non-technical workers

AI thought leader and Wharton professor Ethan Mollick recently shared his surprise at what ‘mild agentic’ systems like o3 can accomplish.

Easy access to AI agents through consumer-grade applications such as ChatGPT and workflow builders such as n8n and Make will increase adoption among a majority of knowledge workers.

But, but, but

While it’s a safe bet that we’ll see increased adoption, usage, and utility over the next several years for agentic AI, it won’t be a smooth ride.

Despite the clear potential, three fundamental challenges are creating a much steeper adoption curve than most vendors acknowledge:

1/ Agents don’t have the mental flexibility needed for many tasks.

A revealing case study comes from Carnegie Mellon's experiment with "TheAgentCompany," where researchers attempted to run an entire business using only AI agents working as software engineers, project managers, finance professionals, HR staff, and even a CTO. The results? Even the best models successfully completed just 24% of their assigned tasks.

Salesforce researchers looking at the performance of the much-hyped Agentforce platform also reported sobering results:

  • Just 58% accuracy on simple, single-function CRM tasks
  • Only 35% accuracy when handling multi-step workflows

Why such poor performance? Carnegie Mellon researchers chalked it up to AI agents lacking real-world common sense, having low context, and demonstrating poor social and communication skills.

This lack of “real-world common sense” comes from the AI’s inability to create underlying mental models of how things actually work. Although AI's pattern recognition is impressive (and is constantly improving), this ‘lack of understanding’ means AI systems don’t have the mental flexibility for all scenarios.

For example, an AI might correctly schedule meetings for weeks but suddenly suggest a 3 AM time slot when asked to accommodate an international participant, missing the common-sense understanding that people need to sleep.

This gap between pattern matching and true understanding explains why agents excel at routine tasks but struggle with the contextual judgment that defines most knowledge work.

2/ Agent rollouts often ignore the drivers of human adoption

After studying over 800 tasks across 104 occupations, Stanford researchers found that successful AI adoption depends on aligning with human preferences, which range from wanting AI to work completely independently to preferring continuous human oversight.

Their study revealed two critical insights:

  • The most commonly preferred collaboration model (45% of occupations) was ‘equal partnership,’ where humans and AI agents work together rather than AI acting entirely on its own.
  • Investments fail when they target areas that AI could highly automate but where workers strongly prefer human control (what researchers called 'red light' zones).

When companies ignore these preferences and force high-autonomy agents into workflows where humans want more control, the technology often gets gradually sidelined rather than adopted.

3/ Unexpected and unknown interactions and behaviors.

Agentic AI systems show a lot of promise for automating burdensome, repetitive work, but unknown behaviors and interactions of AI agents could also chill their adoption curve.

  • OpenAI's o3 model—the same "mild agentic" system Ethan Mollick praised—was found to hallucinate at more than double the rate of its predecessor, according to Axios.
  • After a Claude agent reportedly blackmailed an engineer, AI expert Evan Armstrong warned, "The more agentic these models become, the harder they are to control."
  • Perhaps most concerning, researchers discovered that multi-agent systems can spontaneously "collude" with one another, creating emergent behaviors no developer intended.

As these examples show, there's a real catch-22 here. The same autonomy that makes AI agents valuable is precisely what makes them risky. Companies need to ask themselves: "Is the productivity boost worth the potential headaches when these systems go off-script?"

What's the smart approach to implementing AI agents?

  • Embrace Mollick's "rules of AI" — Remember what Ethan Mollick frequently reminds us: today's AI is the worst we'll ever have. The technology is evolving rapidly, so maintain a flexible mindset rather than making rigid, long-term implementation plans that might be obsolete in six months.
  • Set realistic expectations — Those slick demos from vendors are often heavily curated. In real-world settings, AI agents will stumble more often than you expect. Plan for 30-50% success rates on complex tasks rather than the 90%+ that sales pitches promise, especially in the short term.
  • Start with human-AI collaboration — Microsoft’s Magentic UI is a good example of building AI systems with a "human as first agent" approach. Begin with agents that augment human capabilities rather than replace them outright. The most successful implementations typically have clear human oversight and intervention points.

Gartner recently predicted that companies will cancel over 40% of agentic AI projects by 2027. Mainly, they argue, because companies will over-deploy agentic AI systems.

I agree. The most successful organizations won't be those racing to deploy agents everywhere, but those who measure and develop human-AI readiness and thoughtfully create systems where human strengths like mental flexibility and emotional intelligence complement AI's analytical power.


AI @ WORK BRIEFING

AI TRAINING

AI models will often try to complete your thoughts rather than challenging them constructively.

Here are a few ways to trick the model out of sycophancy and into being a thought partner:

1. Add “Walk me through your reasoning” to your prompt.

Instead of just getting an answer, you see how it arrived there. Game-changer for complex decisions. Example: “Should I quit my job? Walk me through your reasoning.”

2. Use “What’s the contrarian view here?”

Instantly breaks out of echo chambers. It’ll argue against its own first response.

3. Say “Assume I know nothing.”

Even for topics you understand. “Explain cryptocurrency, assuming I know nothing,” gets you foundations that reveal gaps in your knowledge.

4. Ask “What questions should I be asking instead?”

This one’s sneaky good. Often, the question you asked isn’t the right one, and this helps you find a better one.

5. Use “Give me the version for beginners, then for experts.”

Two explanations in one shot. The beginner version clarifies concepts, the expert version gives you depth to sound smart.

6. End with “What would make this backfire?”

This surfaces failure modes before you commit.

The weird part is that these prompts work because they force AI out of “helpful assistant” mode into “thinking partner” mode.

It stops trying to please you and starts trying to solve with you.

Practice: Have any conversations recently that might benefit from an opposing view? Go back and follow up with "What is wrong with this? Suggest improvements." or "What's the contrarian view here?"

(R/T EQ4C on r/ChatGPTPromptGenius)
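If you find yourself reusing these follow-ups, they can be bundled into a tiny helper. A minimal sketch: the FOLLOW_UPS table and the harden function are illustrative names, not part of any library or API.

```python
# Sketch of a helper that appends the anti-sycophancy follow-ups above
# to any prompt. Names here (FOLLOW_UPS, harden) are hypothetical.

FOLLOW_UPS = {
    "reasoning": "Walk me through your reasoning.",
    "contrarian": "What's the contrarian view here?",
    "beginner": "Assume I know nothing.",
    "better_question": "What questions should I be asking instead?",
    "two_levels": "Give me the version for beginners, then for experts.",
    "backfire": "What would make this backfire?",
}


def harden(prompt: str, *techniques: str) -> str:
    """Return the prompt with the chosen follow-ups appended."""
    extras = " ".join(FOLLOW_UPS[t] for t in techniques)
    return f"{prompt} {extras}" if extras else prompt
```

For example, harden("Should I quit my job?", "reasoning", "backfire") produces a single prompt that asks for the answer, the reasoning behind it, and the failure modes in one shot.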

BEFORE YOU GO

The Last 2025 AI for HR Mastermind Cohort is Now Enrolling!

Visit AI for HR Mastermind to apply for our 5-week training program and networking community where HR pros learn together and get personalized coaching to build custom AI assistants. Next cohort starts October 1st. Limited seats available!

Interested in AI upskilling for your People or HR team?

We offer 2-4 hour AI strategy and builder lab workshops, plus private mastermind cohorts for teams. Email us at info@workplacelabs.io for more information!

Is your team facing AI adoption and use problems?

Book a free, 15-minute AI Clarity Call to talk about AI adoption and use issues your team is facing.

6065 Roswell Road, #450, Atlanta, GA 30328-4011
Unsubscribe · Preferences
