Your AI Problem Is Not an AI Problem

Most engineering teams are asking how to make AI fit how they work. That is the wrong question — and it explains why the numbers keep getting worse.

The problem with how most engineering teams are approaching AI is not that they are moving too slowly. It is that they are moving in the wrong direction entirely. Every framework, every workflow guide, every adoption playbook published in the last two years shares the same hidden assumption: that the goal is to take what your team already does and find an AI-shaped version of it. Write better specs. Review code faster. Generate tests you were not writing anyway. The process stays. The AI slots in.

This is the wrong question dressed up as progress. And it explains why the numbers keep getting worse.

In 2024, 17% of companies abandoned most of their AI initiatives. In 2025, that jumped to 42%. Gartner now projects that by the end of 2026, 60% of AI projects will be scrapped before reaching production. This is not a technology adoption curve flattening as the market matures. It is an abandonment curve — accelerating in the wrong direction, three years into what was supposed to be the most transformative shift in how software gets built.

The teams behind those numbers are not laggards. They ran the pilots. They invested seriously. What stopped them is not will or budget. It is that every tool, every workflow, every piece of advice they received was optimising the same thing: how to make AI behave more like the process they already had. When that fails — and it keeps failing — the conclusion is that the tools are not ready, or the team is not ready. Both diagnoses are wrong. The process is the problem.

Sun Tzu put it cleanly: strategy without tactics is the slowest route to victory; tactics without strategy is the noise before defeat. The market is producing tactics at scale. What is missing is the strategic question underneath all of them — which is not "how do we adopt AI?" but "what should we be doing differently now that AI exists?"


The Workflow Was Always a Workaround

To understand why embedding AI into existing processes fails, you have to understand what those processes were actually built for.

The way software teams work today — tickets, handoffs, documentation, code review, sprint rituals — was not designed from first principles. It evolved as a set of adaptations to human limitations. Tickets exist because humans need structured handoffs to communicate across time and context. Documentation exists because humans forget. Code review exists because humans make errors they cannot see in their own work. Standups exist because humans lose shared context without regular synchronisation.

These are not universal laws of software delivery. They are workarounds — elegant, battle-tested workarounds, but workarounds nonetheless — for the constraints of human cognition and human communication bandwidth.

AI does not share most of those constraints. It does not forget. It does not lose context the way a developer does when picking up a ticket cold on a Monday morning. It does not have the same blind spots in its own output that make peer review necessary for humans. When you embed AI into a workflow built around human limitations, you are asking it to operate inside constraints designed for a problem it does not have. The battlefield stays muddy. You get AI writing tickets in the format humans invented to talk to other humans, reviewing code using checklists developed to catch human error patterns, generating documentation that solves the symptom — people not writing things down — while missing the disease entirely.

The disease is that the system was designed for humans, and AI is not a human replacement. It is something else.


The Knowledge That Never Gets Written Down

There is a second, deeper reason why the task-replacement approach fails — and it is one almost nobody in the current discourse is naming clearly.

Every specification framework, every documentation initiative, every structured requirements process shares the same fatal assumption: that if you write it down clearly enough, the AI can work with it. This assumption breaks against a hard reality. The most valuable knowledge in any engineering organisation is precisely the kind that never gets written down.

The judgement call that saved a launch three years ago. The architectural decision that looks arbitrary until you understand the regulatory constraint behind it. The reason the team quietly stopped using a particular pattern. The unspoken rule about what kind of pull request will get pushed back regardless of technical correctness. This knowledge lives in people's habits, in how a senior engineer reacts when they see a certain kind of problem, in what gets said in the room that never makes it into the ticket.

No documentation discipline retrieves it. You cannot write down what you do not know you know. An AI working from your written artefacts will reproduce the visible surface of your organisation while missing everything that actually makes it function. And when it produces output that is technically correct but organisationally wrong — violating a constraint nobody thought to document, repeating a mistake the team learned from years ago — the reaction is frustration with the AI rather than recognition that the problem was never a documentation problem in the first place.

This is also why the non-determinism bothers people more than it should. Engineers are trained to expect reproducibility. When AI produces different outputs from similar inputs, it feels unreliable. But the discomfort is not really about non-determinism — it is about trying to use a tool that operates probabilistically inside a process designed for deterministic outputs. The mismatch is architectural, not technical. Fix the architecture and the non-determinism becomes a feature, not a bug.
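What that architectural fix can look like in miniature: instead of expecting byte-identical output from a probabilistic tool, the surrounding process asserts properties the output must satisfy. This is a hypothetical sketch — `generate_summary` is a stand-in for any non-deterministic AI call, and the checks are illustrative, not a real pipeline.

```python
import random

def generate_summary(text: str) -> str:
    # Stand-in for a model call: same input, differently phrased output each run.
    opener = random.choice(["In short,", "Briefly,", "To summarise,"])
    return f"{opener} {text[:40]}..."

def validate(summary: str, source: str) -> bool:
    # Deterministic checks wrapped around a probabilistic core: the process
    # asserts properties of the output, never an exact string.
    return (
        len(summary) < len(source)   # it actually condensed the input
        and summary.endswith("...")  # it has the expected shape
        and source[:40] in summary   # it is grounded in the source text
    )

source = "Tickets exist because humans need structured handoffs to communicate."
outputs = {generate_summary(source) for _ in range(5)}
assert all(validate(s, source) for s in outputs)  # every variant passes the same gate
```

Every run produces a different string, and every run passes, because the gate was designed for variation rather than against it.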


The Different Question

None of this is an argument for slowing down or waiting for a better framework to emerge. It is an argument for asking a different question.

The question most teams are asking is: how do we get AI to do what we currently do? The question worth asking is: what does this process look like if we design it from scratch, assuming AI is a native participant from the start?

That question produces different answers. A synchronisation mechanism designed assuming AI has full context of every change made since yesterday looks very different from a standup designed for humans sharing status with other humans. A quality process designed assuming AI has already handled the deterministic, checkable mistakes looks very different from a code review process built to catch the full range of human error. A knowledge management approach designed around the fact that AI can hold and connect vast amounts of explicit information looks very different from documentation written to compensate for human forgetting.
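The second of those redesigns can be sketched concretely. Assuming the deterministic, machine-checkable mistakes are filtered out first, human review only ever sees the judgement calls. The names below are hypothetical, not a real tool's API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    deterministic: bool  # machine-checkable (lint, types, known error patterns)

def route(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    # Deterministic findings are handled automatically; only judgement-level
    # findings are escalated to a human reviewer.
    auto = [f for f in findings if f.deterministic]
    human = [f for f in findings if not f.deterministic]
    return auto, human

findings = [
    Finding("unused-import", True),
    Finding("violates-regulatory-constraint", False),  # the undocumented kind
]
auto, human = route(findings)
assert [f.rule for f in human] == ["violates-regulatory-constraint"]
```

The design choice is the point: the human's attention is reserved for exactly the class of error the article argues AI cannot see — the organisational knowledge that was never written down.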

The goal is not to redesign everything at once. It is to identify which parts of how your team works were always workarounds — constraints that existed because of human limitations your team no longer has to work around in the same way — and start there. Those are the places where redesigning the workflow around AI's actual capabilities, rather than bolting AI onto the existing one, will produce something genuinely different.

McKinsey's research makes the practical case directly. Organisations reporting significant financial returns from AI were nearly three times more likely to have fundamentally redesigned their workflows as part of deployment — not bolted AI onto existing processes and hoped for the best. Not three times more likely to have better tools or bigger budgets. Three times more likely to have asked the right question first.

The teams that will look back at this period as a turning point are not the ones that adopted AI most aggressively. They are not the ones that wrote the best specs or ran the most thorough pilots. They are the ones that used AI as a reason to ask which parts of how they work were always constraints rather than choices — and had the nerve to stop working around them.

The hesitation most teams feel is not a signal that they are not ready. It is a signal that they already sense the thing most adoption frameworks are not saying: that what is being asked of them is not a tool change. It is a rethink. And they are right.

Talk to us about this

If this article touches something you are dealing with, we would be glad to have a conversation.