AI Governance Checklist for Corporate Boards

This checklist distills the core governance questions from the five-part series AI Governance for Boards. It is organized around the four dimensions that matter most at the board level: who is accountable, what can be explained, what the board can actually see, and whether humans can still function without the AI.

Use it to assess your organization's current posture, identify gaps, and structure the right conversations with management.

If a question makes you uncertain, that uncertainty is the point.

Accountability

Article 2, Who Is Actually Responsible?

  • Is there a named individual accountable for AI governance outcomes?

  • Does that individual have the authority to stop or modify AI systems without escalation?

  • Do they have independent access to information when something goes wrong?

  • Is the accountable individual's performance evaluation tied to AI governance outcomes?

  • When was the last time the accountable individual halted or overruled an AI deployment?

Transparency

Article 3, The Defensibility Divide

  • Can the organization explain why a specific AI decision was made?

  • Has management demonstrated the AI audit trail for a real consequential decision to the board?

  • Has the board distinguished system transparency from decision transparency?

  • Has external counsel reviewed a sample AI decision explanation for legal sufficiency?

  • Has the organization documented the data sources and training inputs for its most consequential AI systems?

Detection and Reporting

Article 4, The Honor System Problem

  • Are there near-miss reports for AI incidents, not just confirmed failures?

  • Are those reports and related metrics sourced independently of the AI operations team?

  • Would the board know about an AI failure before it reached the media?

  • Does the board have independent paths for surfacing AI problems?

  • When did the board last test whether reporting would catch a real problem?

Human Capability

Article 5, The Workforce Question

  • Has management identified the decisions humans can no longer make without AI assistance?

  • Are there defined override protocols for when AI systems fail?

  • Has management identified roles where AI dependency has crossed a threshold that creates operational risk?

  • Has the board evaluated the long-term risk of capability atrophy in critical functions?

  • Has the board seen evidence of operational continuity when AI fails, or only heard assurances?

If the noes outnumber the yeses, you are not alone. But being in good company does not reduce your exposure.

The full analysis behind each section is available in the five-part series at jasonconyard.com. Start with Article 1: Three Questions Every Board Should Be Asking About AI.



