Everyone has AI. Almost nobody has a plan.

Tools don’t choose the path. You do.


88% of organisations are already using AI in at least one business function. Only 1 in 3 have begun to scale it and see real impact. The gap is not about tools, and it is not about ambition. It is about connecting AI to the outcomes that actually matter to the business.


There is a particular kind of activity that passes for AI strategy right now. Tools get evaluated. Pilots get launched. Dashboards get built. Months pass. And then someone asks the uncomfortable question: what problem are we actually solving?

The answer, too often, is silence.

This is not a technology failure. It is a direction failure. And until organisations address it honestly, most AI investment will continue to generate activity rather than impact.

"The faster your team can build, the more important it becomes that they are building the right things. One wrong turn at high speed can send you far off course."

Reforge — Moving to Higher Ground: Product Management in the Age of AI, 2025

Speed is not the problem. Misdirection is. AI deployed without clarity of purpose does not fix misdirection; it amplifies it.

88% of organisations are using AI in at least one business function.
1 in 3 have actually begun to scale it and see real impact.
McKinsey, State of AI 2025

What is actually happening

When product teams adopt AI, three patterns appear consistently. None of them are about technology.

Customer insights are scattered and hard to access

Insights sit isolated across product teams, sales, and customer service. Product managers spend hours hunting for a picture that already exists somewhere in the organisation, just in five different places.

AI is used to write specs, not to understand customers

Most teams use AI for documentation and content generation. Very few use it to synthesise discovery data, surface patterns across silos, or validate assumptions before committing to a direction.

Speed increases but direction stays unclear

AI accelerates execution dramatically. But if learning has not moved earlier in the process, teams simply arrive at the wrong destination faster. More output, same underlying uncertainty.


Start with the business outcome, not the use case

Before any of the other questions, there is one that matters most: what business goal is this AI initiative actually serving?

Not a capability goal. Not a process goal. A business outcome: the kind that shows up in a strategy review, that connects to where the organisation is trying to go. Increase conversion in this segment. Reduce time-to-resolution for this customer group. Improve the quality of decisions in this part of the pipeline. When that connection is clear, everything else becomes easier to evaluate. Tool choices, team priorities, success metrics, governance decisions: they all have an anchor.

When it is missing, AI initiatives drift. Teams optimise for the wrong things, measure the wrong outputs, and six months later struggle to explain what value was actually created. The investment was real. The outcome just never got defined.

An AI initiative without a business outcome is not a strategy. It is a bet without a hypothesis.

This does not mean every experiment needs a fully articulated business case before it starts. It means the business question should come before the tool selection. What are we trying to achieve? Who does this serve? How will we know it worked? Those three questions, answered honestly, will tell you more about whether an AI initiative is worth running than any vendor demo will.


01 — Frame it as a product problem, not a technology problem

The first mistake is letting the AI conversation start with tools. Which platform? Which model? Which vendor? These are valid questions. Eventually. But they are the wrong starting point.

The right question is: which customer problem do we need to understand faster, or solve better? That reframe immediately changes what you are evaluating. It moves the conversation from capability to purpose. And it gives you a concrete way to measure whether the deployment is working: not in usage metrics, but in customer outcomes.

The question is not which AI tool to use. It is which customer problem AI can help you understand faster, or solve better.

A team that starts with the customer problem will make better tool decisions. A team that starts with the tool will retrofit the problem, and often discover that the problem they used to justify it was never the one that actually needed solving.


02 — Map where AI accelerates learning, not just delivery

Most conversations about AI productivity focus on delivery: write code faster, generate content faster, summarise faster. These gains are real. But they are also the least strategically interesting ones.

The more durable leverage is in learning: the kind that has historically been slow, expensive, or organisationally difficult. Synthesising what customers are telling you across channels. Surfacing patterns buried in support tickets and NPS verbatims. Connecting the insight that sits in sales to the decisions being made in product.

A product team might receive thousands of support contacts a month, hundreds of survey responses, and a continuous stream of sales call recordings. Each source carries signal. Together, synthesised well, they represent something close to a real-time picture of where the product is breaking down and why. Before AI, making sense of this required analysts, time, and organisational will. Now the synthesis can happen in hours.
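The kind of cross-channel synthesis described above can be sketched in a few lines. This is a deliberately minimal illustration using stdlib keyword counting; the channel names and snippets are hypothetical, and a real pipeline would draw on far richer methods (embeddings, clustering, language models) and real data sources.

```python
from collections import Counter
import re

# Hypothetical snippets from three channels; real inputs would come from
# the ticketing system, NPS verbatims, and sales call transcripts.
support_tickets = [
    "password reset email never arrives",
    "cannot log in after password reset",
]
nps_verbatims = ["login is unreliable", "resetting my password took days"]
sales_notes = ["prospect worried about login reliability"]

STOPWORDS = {"is", "in", "my", "the", "after", "about", "never", "took", "days"}

def terms(texts):
    """Tokenise and drop stopwords so recurring themes stand out."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(w for w in words if w not in STOPWORDS and len(w) > 3)

# Combine signal across silos: a theme that recurs across channels is a
# candidate for a real, cross-cutting product problem.
combined = terms(support_tickets) + terms(nps_verbatims) + terms(sales_notes)
print(combined.most_common(3))
```

Even this toy version makes the point: the signal about login and password problems already exists in the organisation; it is only the synthesis across silos that was missing.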

If learning has not moved earlier in the process, you simply arrive at the wrong destination faster.


But what about just running with it?

A fair challenge: is all this framing and strategy talk just slowing things down? Should teams not simply start experimenting and learn as they go?

Yes, and no. Moving quickly is not the problem. Moving quickly without any hypothesis is.

The good news is that you do not need a perfect strategy to start. What you need is a lightweight structure around the experimentation: a clear question you are trying to answer, a prototype or proof of concept scoped tightly enough to test it, and a way to know whether it worked. That is not bureaucracy. That is basic scientific thinking applied to product development.
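That lightweight structure can even be written down as data. The sketch below is purely illustrative, with hypothetical field names and numbers, but it shows how little is needed: a question, a hypothesis, a success metric, and a measured result.

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical, minimal structure for an AI experiment; the fields and
# example values are illustrative, not a prescribed template.
@dataclass
class Experiment:
    question: str        # what are we trying to learn?
    hypothesis: str      # what do we expect to happen?
    success_metric: str  # how will we know it worked?
    threshold: float = 0.0
    result: Optional[float] = None

    def worked(self) -> bool:
        """An experiment only counts once its result has been measured."""
        return self.result is not None and self.result >= self.threshold

pilot = Experiment(
    question="Can AI triage cut first-response time for billing tickets?",
    hypothesis="Auto-routing reduces median first response by 30%",
    success_metric="median first-response time, billing queue",
    threshold=0.30,
)
pilot.result = 0.42  # measured after a two-week pilot (illustrative number)
print(pilot.worked())  # → True
```

An experiment with no measured result never returns true, which is exactly the discipline the prose asks for: no hypothesis, no verdict.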

A team that runs ten quick prototypes with clear hypotheses will learn faster, and waste less, than a team that deploys a fully featured AI tool without knowing what problem it is solving. Speed and rigour are not opposites. The fastest path to real impact is structured experimentation, not unstructured enthusiasm.

Go deeper

We have written separately about the discipline of learning before committing: how to validate assumptions early, how to move evidence into decisions before resources are locked, and why the timing of learning matters as much as the quality of it.

Learning before commitment: the hidden constraint on growth →


03 — Have the mandate conversation

This is the part most AI frameworks skip entirely. They focus on tools, processes, and prompting strategies. Very few address the organisational condition that determines whether any of it produces lasting change.

Mandate is not a soft concept. It is the precise alignment between what you are accountable for and what you actually own. When those two things are out of sync, when a team is responsible for customer experience but does not control the touchpoints that shape it, or when a product leader is expected to drive growth but cannot commission the research that would show where to focus, AI simply adds capability to a broken system. It does not fix the system.

The mandate conversation is most productive when it is framed as a design question rather than a grievance. Not "we do not have enough authority" but "here is what our operating model says we own, and here is what we are actually accountable for. Which model do we want to build toward?"

What you are accountable for and what you own must match. Bring that gap to leadership as a design question, not a complaint.

AI makes this conversation easier to have. When you can show that insights are being synthesised but are not reaching the people who could act on them, the structural gap becomes concrete rather than abstract. That is a far more useful conversation than a general request for more empowerment.


04 — Sort out governance before the rollout

There is a fourth failure mode that rarely makes it into the strategy presentations, but that anyone who has tried to actually deploy AI inside a large organisation will recognise immediately.

The tool exists. The team is ready, often enthusiastic. And then the questions start. Is this tool approved? Who owns the procurement? Is there budget? What are people actually allowed to do with it? Who do you even ask?

What follows is a familiar sequence: emails that go unanswered, meetings that produce more questions than answers, approvals that require sign-off from people who have never heard of the tool, and security reviews that stretch across weeks. By the time the licence finally arrives, the moment has passed. The team has moved on or lost momentum, or quietly started using something else without telling anyone.

A pattern we recognise

A development team, eager to start using an AI coding assistant, asks their product manager to help get access. A reasonable request. The PM spends the better part of a week navigating procurement, IT security, legal, and finance, chasing approvals across teams and systems. The licence eventually arrives. But clear guidance on what the team is actually allowed to do with it never does.

The tools are there. The intent is there. The strategy is not.

This is not an edge case. It is one of the most common ways AI initiatives stall, not in the boardroom, but in the gap between decision and permission, between enthusiasm and infrastructure.

Governance is not the enemy of speed. The absence of governance is. When there is no clear policy on which tools are approved, who can use them, what data they can touch, and what counts as acceptable use, every team either waits indefinitely or improvises independently. Neither produces the outcomes the organisation is hoping for.

A functioning AI strategy answers these questions before teams start asking them. It defines the boundaries clearly enough that people can move within them confidently. No week of emails every time someone wants to try something new.
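To make "boundaries clear enough to move within" concrete, a policy can be encoded as data rather than buried in email threads. This is a hedged sketch only; the tool names, roles, and data categories are invented for illustration, and a real organisation would hold this in its access-management tooling.

```python
# Hypothetical AI tool-use policy: which tools are approved, who can use
# them, and what data they may touch. All names are illustrative.
POLICY = {
    "code-assistant": {
        "approved": True,
        "roles": {"engineering"},
        "data": {"source-code"},
    },
    "transcript-summariser": {
        "approved": True,
        "roles": {"product", "sales"},
        "data": {"call-transcripts"},
    },
}

def allowed(tool: str, role: str, data: str) -> bool:
    """Answer 'am I allowed to do this?' instantly, not after a week of emails."""
    rule = POLICY.get(tool)
    return bool(
        rule and rule["approved"] and role in rule["roles"] and data in rule["data"]
    )

print(allowed("code-assistant", "engineering", "source-code"))   # → True
print(allowed("code-assistant", "engineering", "customer-pii"))  # → False
```

The point is not the code; it is that an answerable policy exists at all. Anything not explicitly approved is denied by default, which is a boundary people can move within confidently.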

You can have a thousand tools and still be completely blocked. Governance is not a blocker. The absence of it is.


00 Start with the business outcome. What goal does this serve? Who does it help? How will you know it worked? Answer those before you pick a tool.
01 Frame it as a product problem, not a technology problem. The question is not which AI tool to use. It is which customer problem AI can help you understand faster or solve better.
02 Map where AI accelerates learning, not just delivery. Use AI to synthesise customer insights, surface patterns across silos, and validate assumptions before committing resources.
03 Have the mandate conversation. What you are accountable for and what you own must match. Bring that gap to leadership as a design question, not a complaint.
04 Sort out governance before the rollout. Define which tools are approved, who can use them, and what the boundaries are, before teams start asking. Ambiguity does not protect the organisation. It just slows everyone down.

The organisations that will get this right are not the ones with the most advanced models or the largest AI budgets. They are the ones that asked the harder questions first: what business outcome are we serving, what are we learning, who owns the decision, and what does the organisation need in order to actually move.

AI without a mandate is just noise. Define what it is for, and it becomes something worth deploying.
