We sit in a fair number of board meetings. For most of the last three years, the AI conversation in those rooms followed a very particular shape. The chief executive would describe an initiative; a non-executive would ask, with admirable politeness, what artificial intelligence actually was; the technology officer would offer an explanation; everyone would nod. Decisions were made on enthusiasm because nobody yet had the vocabulary to make them on anything else.
That shape has broken in the last twelve months. The boards we sit with have learned the vocabulary. The questions are sharper, narrower, and more useful. It is worth writing them down because they predict where the well-governed companies are going to go next.
§02 From 'can it' to 'should it'
The most striking shift is that boards no longer ask whether a model can do a thing. They take that for granted. They ask whether the firm should let it. That is a categorically different conversation. It belongs to the audit committee, not the digital steering group, and the chairs of audit committees have noticed.
The companies handling this well have moved their AI governance into the same machinery as their existing risk governance. They have not built a parallel committee. They have added a line to the existing one.
§03 From 'what model' to 'what evidence'
The other shift, quieter but more significant, is in how boards interrogate AI projects when they are presented for sign-off. The question used to be 'what model are you using?' The question now is 'what evidence do we have that this works, and how would we know if it stopped working?'
That is the right question. It is exactly the question one would ask of any other system the company depends on. The fact that boards now ask it of artificial intelligence is the clearest sign yet that AI is being absorbed into ordinary corporate operations rather than treated as a special category of risk.
§04 Three questions worth bringing to the next board pack
If you sit on a board, or if you are presenting to one, three questions tend to clarify any AI item very quickly. First: who, by name, is accountable if this system makes a wrong decision in production? Second: what would we have to see to switch it off? Third: when did we last actually look at the answers it is giving?
We have watched a great many board papers improve the moment these questions are asked. We have also watched a small number of projects quietly disappear from the pack the following quarter, which is almost always the right outcome.
§05 What this looks like for executive teams
For a chief executive, the practical implication is that the board is no longer satisfied with progress reports. It wants governance reports: named owners, defined kill-switches, and a regular rhythm of evidence. These are not hard things to provide; they are unfamiliar things, particularly for technology functions that have spent the last decade arguing for fewer governance asks rather than more.
The good news is that companies that build this rhythm now will move faster, not slower. The slowest companies in this cycle are going to be the ones that wait until the regulator forces them to write it all down.
