What Your Board Should Be Seeing and Probably Isn't


If your company's regulator called tomorrow and asked you to describe your AI risk profile, what would you say? Not what management presented last quarter. What you actually know, as a director, about the AI systems running in production today, what decisions they're making, and how they're performing.

For most boards, the honest answer is: very little. They know the strategy. They don't know the operation.

Sixty percent of S&P 500 companies view AI as a material risk. Fifteen percent of boards are receiving AI-related metrics. That's not a reporting gap. It's a governance gap. And it has a structural cause that most boards haven't named yet.

I've written about the three questions every board should be asking about AI governance. This piece goes deeper on the third: whether the organization knows when the AI gets it wrong, and whether the board has the information to know it too.

The Honor System Problem

In every other domain of material risk, boards have an independent information path.

Audited financials come from an auditor who doesn't work for management. The audit committee's relationship with outside auditors exists specifically because boards can't rely solely on management to report financial performance accurately. Legal exposure is assessed by outside counsel with an obligation to the company, not to the executive presenting. Cybersecurity risk increasingly comes with independent assessment, penetration testing, and third-party review. These independent paths exist because governance experience has taught us that management's natural incentive is to present information that maintains confidence, not information that surfaces problems.

For AI governance, almost none of that infrastructure exists. There is no required external audit of AI systems. There is no mandated disclosure standard for AI risk. There is no independent assessment sitting between management's narrative and the board's understanding. In December 2025, the SEC's Investor Advisory Committee voted to recommend that the agency issue guidance on AI disclosure, citing a "lack of consistency" in what companies are reporting. The SEC has been tepid in response. The current US regulatory environment means that independent AI governance infrastructure won't come from outside anytime soon.

That leaves boards governing AI almost entirely on the honor system. And the honor system isn't working: sixty percent of companies view AI as a material risk, yet only fifteen percent of boards receive any metrics about it.

The practical consequence is this: a board that views AI as a material risk but receives only management's narrative about that risk isn't governing it. It's approving the narrative.

The Difference Between Strategy and Governance

Because there's no mandated independent path, the information the board receives is almost entirely determined by what management chooses to present. And management, operating under natural incentives, presents strategy updates.

AI strategy updates are designed to build confidence. They cover where investment is going, what capabilities are in development, how the organization compares to competitors. Management is usually good at producing them. They're forward-looking and optimistic by design.

AI governance reporting is designed to surface problems. It covers what's running in production, how it's performing, what's gone wrong, who's overriding what, and whether the oversight structure is actually working. It requires the organization to report on its own gaps. Without a mandate or an independent requirer, that reporting tends not to happen.

These two types of information have opposite purposes. Strategy updates are produced to inspire. Governance data is supposed to challenge. Most boards are receiving one and calling it both.

What Governance Reporting Actually Looks Like

Until external audit standards for AI emerge, boards have to require this information themselves. There are four categories that matter.

The AI risk register. A current, risk-classified inventory of AI systems in production. Not a list of projects in development. A live register that says: here are the systems running today, here is what each one is deciding, and here is what a failure would mean. Most organizations can produce a project pipeline. Very few have built a production risk register. The board can't govern what it can't see.
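For concreteness, here is a minimal sketch of what one entry in such a register might capture, written in Python. The field names and the example system are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class RegisterEntry:
    """One production AI system in a board-facing risk register.
    Fields are illustrative, not a reporting standard."""
    system_name: str
    business_owner: str     # the named accountable executive
    decision_scope: str     # what the system decides, in plain language
    risk_tier: RiskTier     # a classification a director can act on
    failure_impact: str     # what a failure would mean, concretely
    last_reviewed: date     # staleness is itself a governance signal

# A live register is a collection of these, producible on request:
register = [
    RegisterEntry(
        system_name="credit-line-adjuster",
        business_owner="VP Consumer Lending",
        decision_scope="Raises or lowers existing customers' credit limits",
        risk_tier=RiskTier.HIGH,
        failure_impact="Systematic mis-pricing of credit; fair-lending exposure",
        last_reviewed=date(2025, 11, 3),
    ),
]
```

The point of the structure isn't the code. It's that every field is something a director can ask about without technical background.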

Model performance data. Models drift. Inputs change, the world shifts, the training data doesn't. A system that performed well at launch may be making systematically different decisions a year later without anyone noticing. The board needs a regular signal: which systems are performing within expected parameters, which are showing signs of drift, and what's being done about it. Not because directors need the technical details, but because persistent drift is a governance signal, not a technical one.
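One common way to produce that signal is a distribution-shift score comparing a model's recent outputs to its launch-time baseline. The sketch below uses the Population Stability Index, a conventional choice; the thresholds in the comment are a widely used rule of thumb, not something the board reporting itself prescribes:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """Drift score between launch-time model outputs and recent outputs.
    Common rule of thumb (an assumption, not a standard): < 0.1 stable,
    0.1-0.25 watch closely, > 0.25 investigate."""
    # Bin edges are fixed from the baseline (launch-time) distribution
    edges = np.unique(np.quantile(baseline, np.linspace(0, 1, bins + 1)))
    # Clip so out-of-range recent scores land in the edge bins
    current = np.clip(current, edges[0], edges[-1])
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against empty bins before taking the log ratio
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```

What reaches the board is not this number but the trend: which systems crossed the watch threshold this quarter, and what management did about it.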

Incident and near-miss reporting. What went wrong, how often, and how fast was it caught? Near-miss reporting matters as much as incident reporting. A governance program where near-miss numbers are declining is usually not getting better. It's usually getting worse at surfacing problems. A board that sees only resolved incidents is seeing a curated version of the risk picture.

Override and escalation data. When the accountable person stops or overrides an AI-driven decision, that action should be documented, and the board should see a summary. The log answers two questions: is the accountability structure working, and is there a pattern of AI-driven decisions being reversed under pressure because someone with standing judged them wrong? If overrides are happening frequently and without visibility, that's a signal about both the AI and the organization's relationship with it.
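As a sketch, an override log can be as simple as a structured record per event plus a quarterly roll-up for the board. The fields here are illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OverrideEvent:
    """One documented override of an AI-driven decision.
    Fields are illustrative, not a reporting standard."""
    timestamp: datetime
    system_name: str    # which production system was overridden
    overridden_by: str  # role of the person with standing
    reason: str         # why the AI's decision was stopped
    escalated: bool     # did it reach the defined escalation path?

def quarterly_summary(events: list[OverrideEvent]) -> dict:
    """The board-facing view: counts and patterns, not case files."""
    return {
        "total_overrides": len(events),
        "by_system": dict(Counter(e.system_name for e in events)),
        "escalated": sum(e.escalated for e in events),
    }
```

A summary like this surfaces the pattern question directly: one system accumulating most of the overrides, or escalations that never happen, is visible in a single table.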

Why This Is a Personal Liability Question

The Caremark standard defines board oversight duties in US corporate law. Under Caremark, directors face potential personal liability in two situations: when the board fails to implement any reporting system for material risks, or when a system exists and the board consciously ignores what it shows.

In financial reporting, the audit committee has independent verification to anchor its oversight. In AI governance, the board has only what management provides. That makes the first Caremark trigger more likely: failing to implement a reporting system. And it makes the second harder to defend against: ignoring red flags is difficult to disprove when the board was only ever seeing one side of the picture.

Courts haven't yet issued Caremark judgments specific to AI. That means the cases being filed now are building toward a precedent. Fifteen AI-related securities class actions were filed in 2024, more than double the 2023 count, and the trend is accelerating. The directors at the center of the first case that goes the distance will define what boards were expected to know and do. That's not a theoretical risk. It's a timeline question.

A board that has no governance data has no independent basis to evaluate what management says about AI risk. That's not governance. It's deference.

What the Board Should Ask For

The wrong question is "how is our AI program going?" It will get an answer. That answer will be a strategy update.

The right questions require governance data. Is there an AI risk register, and can I see the current version? Which systems in production carry the highest risk classification? How many AI-related incidents were reported last quarter, and how were they resolved? Were any AI-driven decisions overridden in the last quarter, and by whom? What's the escalation path when the model's outputs raise a concern?

If the answers require time to compile, that's important information. A governance program that can't produce a risk register on request hasn't been built yet.

The board's job isn't to interpret the data. It's to establish that the data exists, that someone is responsible for it, and that it arrives on a schedule the board controls, not one management sets.

What Good Looks Like

The boards handling this well haven't waited for regulators to mandate an independent information path. They've built one.

They've assigned clear ownership for AI governance reporting to a named executive who isn't the same person presenting the AI strategy. They receive a quarterly governance summary that covers the production risk register, model performance against baseline, incident count and resolution, and any overrides or escalations. The format is stable enough to surface trends. The accountable executives present it, not only the technology function.

Some are beginning to commission independent AI governance reviews on the same cycle as other independent assessments. That infrastructure doesn't yet exist at scale. The ones building it early are creating the standard the rest will be held to.

The ones waiting will be explaining later why they didn't.
