The Three Questions Every Board Should Be Asking About AI


Boards have always governed what they don't fully understand technically. Capital markets, legal exposure, cybersecurity: none of these required directors to become practitioners. They required something different and arguably harder: the discipline to ask the right questions, at the right cadence, and to hold management accountable for the answers.

AI is no different. Except the pace is.

The capabilities are compounding in months. The governance frameworks are still catching up. That gap, between what AI can now do and what organizations can actually be accountable for, is where most of the risk lives.

Boards aren't failing because they don't understand large language models or computer vision or reinforcement learning. They're failing because they haven't established the governance structures that make accountability real. They're approving AI investment, nodding at AI strategy, and assuming that management has the rest covered.

Most of the time, management doesn't have the rest covered either.

The board's job here isn't as complicated as it sounds. It doesn't require technical fluency. It requires three questions, asked consistently, with the expectation of clear answers.

I've written before about the broader financial and operational risks boards face while chasing AI investment. This piece is about the governance framework underneath: what accountability actually looks like when AI is the decision-maker.

The First Question: Who Is Actually Accountable?

When the AI makes a consequential decision, and makes it wrong, who owns that?

Not which team. Not which system. Not which vendor. One named individual who understands what the system is doing, carries the authority to stop it, and answers for the outcome.

And "system" means all of it: the policy that governs the AI, the process it operates within, and the technology that executes it. These aren't separate problems with separate owners. They're one system, and accountability has to match that scope.

This sounds obvious. It almost never is.

The pattern most organizations fall into is distributed accountability. In practice, that means no accountability. The AI system was built by technology. Deployed by operations. Governed by a risk committee. Overseen by compliance. When something goes wrong, everyone is responsible in principle and no one is responsible in fact. The accountability is so diffuse it becomes impossible to act on.

The CIO is the most common catch-all, and it's the wrong answer. Assigning AI accountability to a technology leader for a system making business decisions is a category error. It separates the person who owns the outcome from the person who's accountable for it. The functional or business owner of the process the AI is running needs to own the accountability for what the AI decides. That's the person who understands the stakes, answers to the customers, and has the authority to stop it.

The board's job is to ask: who is that person, is accountability formally and explicitly assigned to them, and do they actually have the power to act?

The Second Question: Can You Explain and Demonstrate That Decision?

Not the system. The decision.

There's a version of AI transparency that organizations are reasonably good at. They can describe how their AI works in general terms. They can explain the training data, the model architecture, the intended use case. They can produce a responsible AI policy that sounds serious.

What most organizations can't do is defend a specific decision to someone with standing to challenge it.

Imagine a customer who was declined, a regulator who is investigating, or a jury evaluating whether the outcome was lawful and fair. The question they're asking isn't how the system works. It's why this person, in this situation, at this moment, received this outcome. Can you show your work?

The Dutch government's tax authority ran into this at scale. Between 2005 and 2019, an algorithm flagged childcare benefit claims as potentially fraudulent. More than 26,000 families were wrongly accused and forced to repay. When challenged, the system could describe its general logic but not explain specific decisions. The Dutch cabinet resigned in January 2021. More than 2,000 children lost custody as a result.

That failure mode isn't specific to government. The documentation doesn't exist. The decision logic wasn't preserved. The audit trail was never built. And when the challenge arrives, the response is a general description of the system rather than a specific account of the decision.

That gap is where liability lives.

The board's question isn't whether the AI is explainable in theory. It's whether the organization can explain and demonstrate a specific decision, to a specific customer, regulator, or jury, when it matters.

The Third Question: Do You Know When It Gets It Wrong?

The AI will get it wrong. That's not a failure of the technology. It's what happens when any system operates in a complex world with incomplete information. The question isn't whether errors will happen. It's whether the organization knows before the damage compounds.

Most governance conversations about AI error focus on response: how fast can we correct course once we know? That matters. But it's the second problem. The first problem is detection.

Does the organization find out when the AI gets it wrong because its monitoring systems caught it, or because a customer complained, a regulator called, or a journalist filed a story? Is there someone whose job it is to watch, with the authority to act on what they see?

And when an error surfaces, is the person accountable for it empowered to move immediately? Or does correction require a committee, a change management process, a sign-off chain that takes weeks?

There's a harder version of this question. What happens when someone does know and gets overruled?

An AI system flagging problems at quarter close creates a revenue conversation. Stopping it costs something. But an executive who overrides a stop decision to protect the numbers hasn't solved the problem. They've owned it personally.

The pattern isn't new. Wells Fargo executives knew about fraudulent account creation as far back as 2002. The behavior continued under pressure to hit sales targets. When it surfaced, the CEO resigned, individual executives faced personal fines in the tens of millions, and the bank paid $3 billion in criminal and civil settlements. What turned a conduct problem into a catastrophic liability was the evidence that people knew and chose not to act.

AI makes this pattern easier to document and harder to escape. Well-designed detection systems create records. Override decisions create records too. When a board asks whether the organization can stop the AI when it gets it wrong, the honest follow-on question is: what happens when stopping it is inconvenient?

The board's question is: when the AI gets it wrong, does the organization know before the damage is done, and does the person accountable for it have the authority to act, even when acting is costly?

What This Looks Like in Practice

These three questions aren't a one-time review. They're a cadence.

Boards that are governing AI well aren't the ones that approved a responsible AI policy two years ago and moved on. They're the ones asking these questions in the same rhythm they ask about financial controls and legal exposure. Quarterly at minimum. More often when the technology is moving fast or the stakes are high.

The answers will change. AI systems evolve. The regulatory landscape is shifting, and in the US it's shifting away from federal enforcement. That doesn't reduce the board's responsibility. It increases it. The workforce implications are still playing out. What these three questions are really testing is whether management has built the structures that can absorb that change without losing accountability in the process.

Accountability, transparency, and detection are the foundation. The fuller question set builds from there: the AI risk register, regulatory exposure, the organization's commitment to employees whose roles are changing, and the board's own stated risk appetite for AI.

Those questions are worth their own treatment. But they rest on the three.

The board's job isn't to understand the AI. It's to ensure that someone named, empowered, and accountable does.
