When Everyone Is Accountable for AI, No One Is

Most organizations believe they've handled AI accountability. They have a governance committee. A responsible AI policy. Oversight assigned across legal, compliance, technology, and risk. They've written it down and put it on a slide.

Then something goes wrong, and the structure collapses. Because the test of accountability isn't who signed the policy. It's who answers.

Shared accountability has a structural flaw. It's untestable in advance and impossible to act on under pressure. When an AI-driven process makes a consequential error, the question "who owns this?" shouldn't require a meeting to answer. If the answer is a committee, a list of functions, or a process that routes the decision through multiple stakeholders, the organization doesn't have accountability. It has the appearance of it.

And in a crisis, the appearance is worthless.

I've written about the three questions every board should be asking about AI governance. This piece goes deeper on the first of them: what accountability looks like when it's real.

What Shared Accountability Actually Looks Like

A cross-functional AI governance committee is formed. Technology brings the systems expertise. Legal brings the regulatory lens. Compliance owns the risk framework. Risk management owns the escalation process. Each function adds something real to the overall view.

The problem is that adding value is different from being accountable. A committee can advise, review, flag, and recommend. It can't own an outcome. No single member wakes up in the morning knowing they personally answer for what the AI decided yesterday. When a challenge arrives, the response is a meeting.

Accountability that requires a meeting isn't accountability. It's a process for distributing responsibility until it disappears.

What Named Accountability Actually Requires

Named accountability isn't about exposure or blame. It's about decision-making clarity: one person who understands what the system is doing across policy, process, and technology, has the authority to stop it, and answers for what it decides.

Three things have to be true for that to be real.

Comprehension. The accountable person has to understand the system well enough to make a judgment about it. Not technically fluent in the model architecture, but capable of understanding what the AI decides, how it decides it, and what can go wrong. And "system" means the full picture: the policy that governs the AI, the process it operates within, and the technology that executes it. Someone who can only describe the AI in general terms isn't equipped to own what it does in specific situations.

Authority. The accountable person has to be able to act without needing committee approval for every decision. Stop the system. Change the process. Override a decision. If every move requires a sign-off chain, the accountability is nominal. But authority without transparency creates a different problem. The decisions made under that authority, including the decision to stop or override, need to be documented, visible, and reported. Authority and transparency aren't in tension. They're what make each other legitimate.

Consequence. The accountable person's name is attached to outcomes. When the regulator calls, when the customer sues, when the board asks: there's one person who owns the explanation and stands behind it.

Why Organizations Prefer Shared Accountability

The resistance to named accountability is real and understandable. Shared accountability feels safer for individuals. No single person is personally exposed. It feels collaborative and thorough; more eyes on the problem look like more oversight. And it distributes risk across functions in a way that seems prudent.

What it actually does is make accountability untraceable when it matters. The more diffuse the ownership, the harder it is to stop anything, explain anything, or correct anything with speed. Risk doesn't disappear through distribution. It accumulates, because no one is watching the whole picture.

There's also a harder truth. Organizations often don't name a single accountable person because no one wants the role. Named accountability means you can't say the committee approved it, or that you escalated the concern. Your name is attached to what the AI decides, including when it's wrong. That's uncomfortable for individuals and convenient for organizations that want the appearance of governance without the reality of it. Assigning it to a committee protects everyone in the room. It just doesn't protect the organization, the customer, or the outcome.

The Business Owner Problem

The most common workaround is assigning accountability to the CIO or the technology function. I've written before about why that's a category error. The harder question is where accountability should sit instead.

The claims processing AI belongs to the head of claims. The lending decision model belongs to the head of lending. The hiring algorithm belongs to the business leader who owns the headcount. These are business decisions, not technology decisions, and they need business owners.

The CIO can own the infrastructure, the platform, the security posture. But the accountability for what the AI decides belongs to the person who owns the business outcome. Separating those two things looks like governance. It's actually a vacancy.

Zillow found this out at scale. Its AI-driven home-buying program had been purchasing properties since 2018. By 2021, the pricing models were consistently overvaluing homes. When the errors compounded, losses hit $528 million in a single quarter and two thousand jobs were cut. By the time the CEO stepped up to take public ownership, the damage was already done.

Named accountability before the failure would have meant one person watching those pricing decisions, with enough comprehension to question what was happening and enough authority to stop it before losses reached half a billion dollars. The CEO taking accountability afterward shows the right instinct. It's just the wrong moment.

What the Board Should Look For

The board's job isn't to redesign accountability structures. It's to test whether real accountability exists.

Three questions establish that quickly. Is there one named individual accountable for the decisions this AI makes? Not a function, not a committee. One person. Can that person stop the process without committee approval? Do they understand the system well enough, across policy, process, and technology, to make that call?

If the answers are a list of names, or "the governance committee handles escalations," or "the CIO has overall technology accountability," the structure is missing something essential. It looks like governance. It isn't.

The board asking those questions clearly, and expecting clear answers, is itself an act of governance. The discomfort it creates is the point.
