The journal
Strategy · May 2026 · 8 min

Enterprise procurement has found its questions. Most AI vendors are still rehearsing their answers.

The enterprise AI buying cycle changed materially in the first quarter of 2026 — longer timelines, real technical due diligence, and procurement questionnaires most vendors cannot honestly complete. The shift is being driven by boards who authorised AI programmes in 2023 and 2024 and have since seen what they produced. Here is what the smart buyers are now testing for, and why the firms that have only ever sold demos will not pass.

By
Julian R. Mountford
Founder & Chairman

Something changed in the enterprise buying room in the first quarter of this year. The deals that would have closed on a working prototype and a credible team — the kind we were signing on a forty-five-minute slide deck with a live demo eighteen months ago — are now taking a minimum of six additional months, passing through two or three approval stages that did not exist before, and arriving with procurement questionnaires that, in several cases, have been longer and more detailed than anything a major consultancy would face bidding on a ten-year managed services contract.

The reason is straightforward. The boards that authorised AI programmes in 2023 and 2024 have now seen the results. Some of those results have been excellent. A significant number have not. The ones that were not have produced a particular kind of organisational learning — not a retreat from AI, but a finely calibrated scepticism about the firms selling it, the commitments those firms are willing to make, and the difference between a vendor who has shipped something and a vendor who has operated something.

We have tracked this shift across fourteen procurement processes since January. The technical due diligence is no longer ceremonial. We have sat in rooms where a senior engineer from the buying organisation spent three hours with our engineers discussing model versioning, production drift, rollback procedures, and incident response protocols. Those conversations did not exist two years ago, or if they did, they were conducted by a junior procurement analyst with a checkbox. They are now being run by people who have been burned once and do not intend to be burned again.

§02 The governance document most vendors cannot produce

The most visible change is in what buyers now mean when they ask about AI governance. In 2023, a governance document was a slide deck with a commitment to responsible AI and a list of the company's ethical principles. The sophisticated buyers have moved past that entirely. What they ask for now is operational: who decides when a model is retrained, what criteria trigger that decision, who reviews the output before a change enters production, and how that review is audited.

This matters because every model in production changes over time. The model the buying organisation evaluated at procurement is not identical to the model processing their workflows six months later, and in most enterprise AI deployments there is no formal record of why it changed or who approved the change. The buyers who encountered problems in 2024 were not, in most cases, damaged by a model that was initially bad. They were damaged by a model that drifted quietly, without anyone having written down what would happen when it did, and without the governance infrastructure to determine whether the drift was intentional or accidental.

We have had three procurement cycles in the first quarter where a competitor could not produce a governance document. Not a deficient one; they could not produce any version of one. Our document was first written in 2022 and has been substantially revised twice since. It took incidents to revise it, and those revision notes are visible in the document's history. A sophisticated buyer reads a governance document that has been revised in response to real problems as evidence that the vendor has been close enough to those problems to be forced to change their thinking. A pristine document written in the last three months reads as exactly what it is.

§03 What the case study omits

The second shift is in what counts as evidence of delivery capability. Case studies are not dead; we still produce them. But the enterprise buyers who have been through a bad deployment have learned to read them with new scepticism. A case study is a marketing document. It describes a project through the lens of its outcome. It omits the weeks where the system was in shadow mode because the output was not ready, the compliance review that required redesigning a data pipeline, the integration that took four attempts to stabilise. The experienced buyer knows this.

What the smart buyers are now asking for — and what we have been asked to provide, under NDA, in two engagements this quarter — is something closer to a production postmortem. Not a polished document: a real description of what went wrong in a comparable deployment and what the vendor did about it. The willingness to describe failure in detail is the most reliable signal to an experienced buyer that a vendor has actually operated a system rather than simply shipped it.

We have had incidents. They are not incidents we advertise, but they are incidents we documented. An agentic pipeline that ran against a changed document corpus for six days before the output quality degradation was noticed. A batch reconciliation agent that silently skipped items and returned a result that passed automated validation. In a procurement conversation in February, we described both in detail to a prospective client's technical director. We won the engagement. In the post-decision debrief, she told us we were the only vendor to bring a failed deployment into the room — and the only one to explain, specifically, what had changed in our architecture as a result.


§04 The name on the contract

The simplest requirement — and the one that eliminates the largest number of vendors in the current market — is personal accountability. The question, framed as we have now heard it in five separate procurement conversations: who, at your firm, is personally accountable if this system produces a materially wrong output that damages our business? Not a liability cap. Not an indemnity clause. A name, a direct number, and a contractual commitment that the named individual is reachable within four hours of a production incident.

Most AI vendor contracts are structured to make this question unanswerable in the affirmative. The liability exclusions, the disclaimers about model output, the force majeure clauses broad enough to cover a model provider's own service degradation — they point collectively in the opposite direction from accountability. The enterprise buyers who have read enough of those contracts have learned to ask the question before the contract is drafted, which is where it belongs.

We have negotiated this clause in two separate engagements in the last quarter. In one case it required a contract structure we had not previously used. In both cases we could answer the question, because we have a standing practice of naming a senior engineer and a delivery lead in every production engagement, with a four-hour response commitment written into the statement of work. In the same procurement processes, the firms that could not answer the question simply declined to answer it. Neither of them won.

§05 Where the market is going

What these three tests add up to is a market that is starting to distinguish between firms that have been close to production AI systems over time and firms that have been close to pitching them. Those are very different bodies of experience, and for two years the pitch was compelling enough that the distinction was hard to identify in a procurement room. It is no longer hard to identify, because the buyers know which questions expose it.

The firms that will win the significant enterprise programmes in the second half of 2026 are not going to be the firms with the most refined demos. They are going to be the firms that can produce, without advance notice: a governance document that has been revised in response to real incidents; a postmortem that shows the vendor was present when something went wrong; and a named senior who is contractually accountable for the system in production. None of these are difficult to produce if you have been operating production AI systems. All of them are impossible to produce convincingly if you have not.

The two-year period in which a credible demo and a confident team could close an enterprise AI contract appears to be over. What replaced it is, in our view, a better market — not necessarily a faster or easier one, but one that rewards the right things more reliably than the previous version did. That is worth noting even for the firms, including ours, who find the new procurement process considerably harder to move through than the old one.

§06 The clients worth being patient for

The clients we are working most closely with right now are not the ones who moved fastest in 2023. They are the ones who asked the most uncomfortable questions before they would agree to start. They insisted on governance documentation that felt, at the time, like excessive rigour for a prototype. They wanted to know, before a line of code was written, who would be accountable if the system failed in production and what the rollback procedure looked like. Several had been through a bad AI deployment with a different vendor and were not prepared to repeat it.

Those clients have the most stable, the most trusted, and the most consistently used AI systems in their portfolios right now. The rigour they demanded at procurement is the reason those systems are still running. It would be tempting to draw from this a conclusion about patience — that the cautious clients are always the best clients. The more useful conclusion is narrower: clients who make the procurement process harder by asking better questions tend to make the delivery more successful by establishing clearer accountability from the start. That is not always comfortable. It is almost always worth it.

About the author
Julian R. Mountford
Founder & Chairman

Every piece in the Journal is written personally by a senior practitioner, drawing on the engagement that motivated it. No ghostwriters, no content team, no models. If a paragraph here resonates with a problem you are looking at, the author is the person to reply to — direct lines beat anonymous inboxes.

Get in touch with the practice