What Boards May Be Missing While Chasing AI

Every dollar spent on AI comes from somewhere else in the technology budget. That is the trade-off most boards aren’t tracking.

Budget Cannibalization: The Real Trade-off

AI is rarely funded with new money. It comes from the same pool that pays for the basics: the programs that keep the business secure, reliable, and compliant.

The danger is obvious: in chasing AI, organizations defer or underfund these essentials. Security updates get delayed. ERP upgrades are pushed back. Resilience investments are cut. All of this leaves the business more fragile just as it is placing bigger bets.

Boards don’t decide those trade-offs directly, but they must ensure the CEO is making them strategically. Too often, leaders fall back on “fair-share” cuts, spreading reductions evenly across functions in the name of balance. It may look equitable, but it’s dangerous. Critical areas end up underfunded when what’s really needed is deliberate prioritization.

A 2025 Campus Technology study reported that 52% of organizations say their AI security spending is eating into existing security budgets, a clear sign that AI investment is coming at the expense of foundational security.

Boards should be asking:

  • Which projects have been delayed or cancelled to fund AI?

  • What risks are we taking by deferring those projects?

  • Are those risks visible and explicitly accepted at the board level?

  • Is the CEO making cuts strategically, or simply spreading them evenly?

If AI is being funded by undermining the foundations, or if cuts are applied indiscriminately, the strategy is not sustainable. Boards should require regular review of deferred IT projects to ensure AI isn’t funded at the expense of critical foundations.

Reliability: The Cost of Deferred Investment

AI doesn’t run in a vacuum. It sits on top of networks, power, cloud capacity, and recovery systems. If those are underfunded, reliability becomes the silent risk multiplier.

Boards should remember: a single major outage can erase millions in revenue, damage customer trust, and trigger compliance exposure if critical services fail. According to the Uptime Institute’s 2024 Outage Analysis Report, more than two-thirds of major outages cost over $100,000 in business impact, and the largest run into the millions.

Boards should insist on a quarterly resilience risk briefing that highlights whether essential investments are being delayed to make room for AI.

Cost Escalation: The Financial Blind Spot

The first AI invoice is never the last. Compute and licensing are just the entry point. As projects scale, they drive demand for storage, bandwidth, monitoring, and specialized staff. These costs don’t just add up; they compound, often faster than boards are shown in forecasts.

In 2025, average monthly AI budgets are rising by approximately 36% year-over-year, underlining how quickly cost escalation can erode oversight if forecasts lag spending growth (CloudZero State of AI Costs, 2025).
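To see what compounding at that rate does, a back-of-the-envelope sketch in Python helps. The $500,000 starting monthly budget below is a hypothetical figure, and holding the 36% rate flat for five years is an illustrative assumption, not a forecast from the report.

    # Back-of-the-envelope projection of AI spend compounding at 36% a year.
    # The starting budget is hypothetical and the flat growth rate is an
    # illustrative assumption, not a forecast from the CloudZero report.
    monthly_budget = 500_000      # assumed starting monthly AI spend (USD)
    annual_growth = 0.36          # reported year-over-year growth rate

    for year in range(1, 6):
        monthly_budget *= 1 + annual_growth
        multiple = (1 + annual_growth) ** year
        print(f"Year {year}: ~${monthly_budget:,.0f}/month "
              f"({multiple:.1f}x starting spend)")

Under these assumptions, spend roughly doubles in under three years and reaches about 4.7 times the starting level by year five, which is why a first-year invoice tells a board very little.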

Do we have a total cost of ownership model that shows how AI spending will grow over the next three to five years, or are we only seeing the first invoice?

Compliance and Disclosure: The Governance Blind Spot

AI also brings regulatory exposure. New disclosure requirements are expanding, and AI’s energy and data footprint increasingly falls within their scope. Europe’s Corporate Sustainability Reporting Directive is already in force, and the SEC has adopted climate-related disclosure rules, though those rules remain stayed amid litigation.

Boards should also note that the EU AI Act, the first comprehensive AI regulation, entered into force in August 2024, with prohibitions on certain practices applying from February 2025 and transparency and accountability obligations phasing in through 2026 (Investopedia). If AI materially increases energy demand without a plan to manage it, boards may face compliance exposure and credibility challenges with both regulators and investors.

If AI materially increases our energy demand, how will we explain that to regulators and investors?

Decision Rights: The Governance Frontier

Consider credit scoring systems that automatically reject loan applications, or hiring filters that screen out candidates before any human reviews their résumé. Decisions once made by people are now made by AI systems that approve loans, filter job candidates, flag transactions as fraudulent, and recommend products, with little or no human review. When a customer is denied credit by an algorithm, who owns that decision?

Every new capability raises questions of ownership and accountability:

  • Who approves model updates?

  • Who owns bias testing and monitoring?

  • Who decides when to shut down a problematic system?

  • Who governs customer data use, including consent models, retention policies, and ethical safeguards?

The challenge goes deeper. Traditional IT governance assumes that, ultimately, humans make the final call. Accountability frameworks designed for human decision-making don’t map cleanly to systems that operate independently.

Boards are used to governance models where accountability ends with a human signature. AI shifts that. Systems can act, adapt, and evolve in ways that may not always be visible. Governance must shift from approving tools to governing actors, defining clear decision rights for model changes, assigning ownership of customer data policies, and establishing explicit triggers for when a system must be paused or shut down.
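What an “explicit trigger” can look like in practice is worth making concrete. The sketch below is a minimal, hypothetical illustration in Python; the metric names, thresholds, and owner titles are invented for this example, not drawn from any standard framework.

    # Hypothetical guardrail: flag a model for pause when monitored metrics
    # breach board-approved thresholds. Every name and limit here is an
    # illustrative assumption, not an established governance standard.
    from dataclasses import dataclass

    @dataclass
    class PauseTrigger:
        metric: str   # monitored signal, e.g. a fairness or error-rate metric
        limit: float  # threshold approved through the governance process
        owner: str    # named human accountable for the pause decision

    TRIGGERS = [
        PauseTrigger("demographic_parity_gap", 0.05, "Chief Risk Officer"),
        PauseTrigger("false_positive_rate", 0.10, "Head of Model Governance"),
    ]

    def breached(metrics: dict) -> list:
        """Return every trigger whose threshold the current metrics exceed."""
        return [t for t in TRIGGERS if metrics.get(t.metric, 0.0) > t.limit]

    # A monitoring job feeds in the latest measurements; any hit escalates.
    for t in breached({"demographic_parity_gap": 0.08,
                       "false_positive_rate": 0.04}):
        print(f"PAUSE {t.metric}: limit {t.limit} exceeded; "
              f"escalate to {t.owner}")

The point is not the code but the structure: each trigger pairs a measurable condition with a named human owner, so that when the system acts autonomously, accountability still has somewhere to land.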

That’s the governance frontier. Boards must ask not just who decides, but how accountability works when the system itself is the decision-maker.

What accountability framework do we have for AI systems that make autonomous decisions at scale, and who stands behind those decisions when things go wrong?

Closing Reflection

AI deserves board-level attention. But oversight isn’t about the excitement of what’s possible; it’s about clarity: what’s at risk, what’s being deferred, and how governance itself must adapt.

The boards that succeed won’t be the ones that simply endorse AI. They’ll be the ones that ensure it is funded deliberately, built on resilient foundations, and governed with a recognition that AI changes not just what the organization does, but how decisions are made.

Good questions are not enough if the information flow to the board is filtered or incomplete. Boards should also invest in ongoing education on AI risk and governance, as technologies, regulations, and risks continue to evolve.

Ultimately, it won’t be the breakthroughs that determine success. It will be whether the business is stronger or more fragile after the excitement fades.
