The Workforce Question Boards Aren't Asking
Most boards have had some version of the AI workforce conversation. Which jobs will be displaced? Which will be created? What is the company committing to in retraining? What does it owe employees whose roles are changing? These are legitimate and important questions, and the boards asking them are doing something right.
They're not asking the harder one.
When your AI gets something wrong, do the people responsible for catching it still know how to catch it? Not in theory. In practice, under pressure, in real time. The governance question isn't only what AI does to headcount. It's what AI does to human judgment, and whether the organization has thought carefully about losing it.
I've written about the three questions every board should be asking about AI governance, the accountability structures that make oversight real, and the reporting that lets a board see what's actually happening. This piece is about the question underneath all of it: what happens to human capability when AI handles more of the decisions?
A Lesson from Aviation
On May 31, 2009, Air France Flight 447 departed Rio de Janeiro for Paris. Early the next morning, midway across the Atlantic, ice crystals blocked the aircraft's pitot tubes, cutting off reliable airspeed data, and the autopilot disengaged. The aircraft was still controllable. Manual flight at cruising altitude was unusual, but it was well within what a trained crew should be able to handle.
What France's aviation investigation authority, the BEA, found was more troubling than an equipment failure. The crew, startled by the abrupt handoff from automated to manual flight, never regained situational awareness. For several minutes the aircraft remained in a sustained stall while the crew applied inputs that deepened it. They never recognized the situation they were in. All 228 people on board were killed.
The BEA's investigation traced part of the cause to what researchers call the automation paradox: the more reliable an automated system, the less frequently its operators are called on to develop and maintain the skills to function without it. The crew could fly with the autopilot. When the autopilot disengaged, the instincts that should have been there weren't. Not because the pilots were incompetent. Because the system of training and deployment hadn't accounted for what sustained automation was doing to their manual capability.
This is not a story about bad technology. It's a story about what happens to human skill in an environment of high automation reliability. And it applies directly to AI governance.
The Organizational Version of This Risk
When AI handles the decisions, do your people still know how to make them?
When AI processes the credit applications, do your underwriters still have the judgment to evaluate a case the model flags as borderline? When AI screens job candidates, do your hiring managers still understand what they're looking for well enough to catch bias in the output? When AI manages supply chain logistics, do your operations teams understand the system well enough to recognize when it's optimizing toward the wrong objective?
These aren't rhetorical questions. They're governance questions. And they connect directly to the accountability structure every board should already be demanding.
In an earlier piece in this series, I described the accountable person for an AI-driven process as someone who understands what the system is doing across policy, process, and technology, and has the authority to stop it. That comprehension isn't static. It can atrophy. A person who has spent two years approving AI outputs rather than exercising independent judgment is losing, in real time, the understanding they need to recognize when something is wrong.
The board that has named an accountable person but hasn't asked whether that person retains genuine comprehension hasn't completed the governance structure. It has left a gap it can't see.
The Two Questions Boards Need to Add
The first is about accountability for the people, not just the process. Who owns the workforce implications of AI deployment? Not headcount, not the retraining budget, but capability. Which human skills are being displaced by which AI systems, at what pace, and what is the organization doing to ensure it retains the judgment it needs to govern what it's built?
Most organizations can answer the headcount version of this question. Very few can answer the capability version. A governance committee isn't an answer. A named individual who understands which human capabilities are at risk, and is accountable for maintaining them, is.
The second question is about dependency. For each high-risk AI system in production, what is the organization's ability to function if the system produces a wrong or unexpected output? Can the accountable person describe what a failure looks like? Can the team catch it? Has anyone tested whether the people responsible for oversight can actually exercise it?
That question sounds harder to operationalize than a risk register. It is. But the BEA didn't conclude that Air France's pilots were unqualified. It concluded that the system of training and deployment hadn't kept pace with what automation was doing to their skills. The same dynamic plays out in any organization that hands consequential decisions to AI without tracking the human capability required to govern it.
What Good Looks Like
Organizations handling this well have added a question to every major AI deployment decision: what happens to the people who used to make this judgment, and what do we need from them going forward?
That question has two distinct parts. The first is the social and ethical obligation most boards are already discussing: fair transition, honest communication, genuine investment in people whose roles are changing. The second is the governance obligation most boards aren't discussing yet: ensuring that the humans who remain in the loop have the comprehension and skill to actually be in the loop.
Boards don't need to design the training program. They need to ask whether one exists for this specific purpose, who owns it, and what evidence they're seeing that it's working. The same discipline that applies to financial controls and AI risk registers applies here. Data, cadence, ownership.
AI governance that doesn't account for human capability alongside AI capability is governance with a blind spot. The organizations closing it aren't only protecting their employees. They're protecting their ability to govern AI at all. That's the only thing that stands between a consequential mistake and a catastrophic one.