Note: This is part 1 of a series on AI implementation.

Here’s the weird thing about AI adoption in real life:

Some individuals are achieving remarkable results while organizations struggle to replicate their success. This isn’t just about different skill levels. It points to something more fundamental about how AI competency actually develops.

The current consulting narrative often promises wins through universal AI adoption. Immediate impact on the bottom line is a best-seller.

The recipe seems simple:

  1. Distribute AI access
  2. Provide some examples of use cases
  3. Run training programs

Off you go.

The assumption? Success is primarily a matter of exposure to the tools, plus a dash of technical training.

This assumption is wrong.

Small Change, Big Difference

Let me share a recent example from my own work. A client in holistic healthcare was hitting walls with AI-assisted content creation. The models kept blocking attempts to explore alternatives to conventional treatment approaches. Certain terms triggered safety filters.

This can be frustrating. Many users would give up here.

However, we were able to solve the issue. After the client shared the relevant materials and prompts with me, I got the AI system to work with us without any pushback. This is only in part about “being clever.” The edge lies in a developed intuition for how these systems work.

What did I do? I reframed the discussion, removed or avoided the specific trigger terms, and recast the topic as traditional versus holistic approaches, conventional versus integrative healing paradigms.

The AI immediately provided a nuanced response and content we could work with. In fact, the output even expanded on our original, narrower approach, making the message more inclusive.
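To make the reframing concrete, here is a minimal, purely illustrative sketch (in Python, only to keep the before and after side by side). The prompts are hypothetical stand-ins, not the client’s actual wording or topic; the point is the shift in framing, not any particular tool.

    # Purely illustrative: the prompts below are hypothetical stand-ins,
    # not the client's actual wording or topic.

    # Original framing: confrontational, and it leans on the kind of terms
    # that tend to trip safety filters.
    blocked_prompt = (
        "Write copy explaining why patients should choose [specific "
        "alternative therapy] instead of conventional treatment for "
        "[condition]."
    )

    # Reframed version: the same underlying subject, cast as a comparison of
    # paradigms rather than an argument against medical advice, with the
    # trigger terms removed.
    reframed_prompt = (
        "Write an article comparing traditional and holistic approaches to "
        "patient care, contrasting conventional and integrative healing "
        "paradigms and highlighting where each adds value."
    )

    if __name__ == "__main__":
        # Use whichever chat interface or API you prefer; the framing is
        # what changes the outcome, not the tooling.
        print(reframed_prompt)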

The change was rather small. The difference was big: from “this ain’t working” to “this is actually pretty great.”

This isn’t genius. It’s experience translated into intuition.

New Intelligence

These AI systems represent something fundamentally new: not just software to be operated, but another form of intelligence to be engaged with. They don’t follow the predictable patterns of traditional software. They require us to interact with somewhat alien minds. Moreover, we do that through a very familiar UI/UX: the chat interface, or even voice. Confusingly, though, there is no human on the other side, but a knowledgeable and (in its own way) intelligent system that is eager to help.

For this to work, users need to develop a feel for how these systems think and respond - what we might call “system intuition.” This isn’t about memorizing prompts or following playbooks. It’s about understanding, at an intuitive level, how these systems process and respond to information.

This kind of system intuition enables you to:

  • Understand how these systems think
  • Recognize their quirks and capabilities
  • Sense which ways of interacting with them will be effective

It’s not procedural knowledge; it’s pattern recognition that develops through experience and reflection. You also need to project forward, because we don’t yet know the limits of the technology.

Some might argue that starting with use cases could eventually lead to this pattern recognition and intuition. I’m skeptical. In my experience, true fluency emerges from open-ended exploration and problem-solving, not from following predetermined paths.

Collective Intuition Is Innovation

Here’s the real challenge for organizations: You’re not just rolling out tools. You’re developing a new form of individual and collective intuition that can effectively leverage this new kind of intelligence.

Organizations are treating AI adoption as a training problem when it’s actually an intelligence development challenge.

The solution isn’t more training programs. It’s creating environments where genuine understanding can emerge through guided experimentation and reflection.

If you’re measuring daily AI tool use, fine. It’s something you can measure easily. But I’d say you are measuring a proxy variable.

What most organizations avoid, and what many consultants are neither equipped nor willing to explore, is:

  1. Uncovering and formulating the value creation process of the business.
  2. Connecting the technology’s capabilities and limitations (and the associated risks) to that value creation process.

This would enable you to measure what you need to measure. And it would point to where you want to focus your implementation efforts.

Building Your Circle of Excellence

Smart organizations aren’t rushing to implement AI everywhere. They’re building circles of excellence—internal AI labs where expertise and intuition can develop organically.

These hubs serve as crucibles of experimentation and learning. They’re where individual intuition transforms into collective capability. Where real expertise grows through practical problem-solving, not theoretical training.

The organizations that get this right are creating spaces where people can develop genuine fluency with these new forms of intelligence. You need to develop this with the people you have. You can’t simply hire for it, as used to be the case. That’s a major, but often overlooked, new reality.

So this endeavor is about growing islands of excellence whose influence ripples outward, transforming how your entire organization thinks about and works with AI.

You can’t train your way to AI excellence.

You have to grow it.


This is slightly ironic:

While the field uses the terminology “training the model”, the process is actually much more like “growing”, as in growing things in your garden.

You are not really in full control of the process. You provide the ingredients and the environmental conditions. You see what you get and sample the fruit when it’s grown.