The $4.4 Million Mistake: What EY's AI Governance Research Means for Every CIO in 2025
EY surveyed 975 C-suite leaders across 21 countries in mid-2025 and found that 99% of organizations have already experienced financial losses from AI-related risks. Nearly two-thirds suffered losses exceeding $1 million. The average loss: $4.4 million. This is not a future risk. It is a current one, and only 12% of C-suite leaders surveyed could correctly identify the right controls against the five most common AI risks.
Here is a number that should land on every CIO's desk before any new AI initiative gets approved: $4.4 million.
That is the average financial loss organizations have already suffered from AI-related risks, according to EY's Responsible AI Pulse Survey, conducted with 975 C-suite leaders across 21 countries in August and September 2025. These were not hypothetical projections. They were measured losses, reported by executives who had lived through them.
And almost every organization in the study had experienced them: 99% of those surveyed reported financial losses from AI-related risks, and nearly two-thirds (64%) suffered losses exceeding $1 million.
The most common causes: non-compliance with AI regulations (57%), negative impacts on sustainability goals (55%), and biased AI outputs (53%).
This is not a future risk landscape. It is the current one.
The Confidence Gap That Makes This Worse
What makes EY's findings particularly striking is not the scale of the losses. It is the gap between how prepared leaders think they are and how prepared they actually are.
EY's first Responsible AI Pulse survey, conducted in March and April 2025, found a substantial disconnect between C-suite confidence in AI systems and the actual governance controls in place. While executives felt broadly well-prepared, the controls beneath that confidence were thin.
The second phase, conducted in August and September 2025, made the gap quantifiable. When asked to identify the correct controls against five common AI-related risks, only 12% of C-suite respondents answered correctly.
That is not 12% with poor controls in place; it is 12% who could even identify what good controls look like. The other 88% of C-suite leaders at large enterprises, all with over $1 billion in revenue, cannot accurately describe what it takes to protect their organizations from the AI risks they are already running.
This is not a technical failure. It is a knowledge and governance infrastructure failure, and it is compounding daily.
The Agentic AI Wildcard Nobody Is Ready For
EY's research adds a time-pressured dimension to the governance problem: the rapid rise of agentic AI.
In the 2025 survey, 76% of respondents said their organizations plan to use agentic AI within a year, yet only 56% said they are familiar with its specific risks. That 20-point gap means roughly one in four organizations planning to deploy autonomous AI agents has not yet developed a working understanding of what can go wrong.
Agentic AI systems, which can set goals, take actions, and operate across systems with limited human oversight, represent a categorically different risk profile from traditional generative AI. McKinsey's December 2025 board governance report found that 80% of organizations have already encountered risky behaviors from AI agents, including improper data exposure and unauthorized system access. McKinsey describes autonomous agents as "digital insiders": entities operating within corporate systems with varying levels of authority and no consistent oversight model.
When an AI agent makes a consequential decision in a procurement context, a customer communication, or a compliance filing, who is accountable? In most enterprises today, the honest answer is that no one has defined this yet.
The Board Visibility Problem
The governance failure is not limited to the executive suite. It extends, or more precisely fails to extend, to the board.
McKinsey's December 2025 analysis of board AI oversight found that although 88% of companies use AI in at least one business function, only 39% of Fortune 100 companies have disclosed any form of board oversight of AI, whether through a committee, a director with AI expertise, or an ethics board.
Among directors globally, 66% report having "limited to no knowledge or experience" with AI. Nearly one in three say AI does not even appear on their board agendas.
The National Association of Corporate Directors' 2025 survey adds a further data point: fewer than 25% of companies have board-approved, structured AI policies. Most have principles or ethics statements, documents that signal intent without creating accountability. Only 15% of boards currently receive AI-related metrics from management.
The financial consequence of this board-level gap is now quantified. A 2025 MIT CISR study found that organizations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity. Organizations without AI-savvy boards trail their industry average by 3.8%.
The same governance gap that creates risk also destroys performance.
What the Companies Getting It Right Are Doing Differently
EY's research did not only document losses. It identified what separates the organizations avoiding them.
Companies with real-time AI monitoring and oversight committees report measurable revenue gains, higher employee satisfaction, and cost savings, outcomes that elude organizations without those structures in place.
The distinction between governance as a compliance exercise and governance as a performance lever is not semantic. EY's data shows it is financial. Organizations treating responsible AI as an operational discipline, not a policy document, are the ones generating returns from AI while others absorb losses.
Four practices characterize these organizations:
1. Real-time monitoring of AI outputs, not retrospective auditing. By the time a biased output or compliance violation surfaces in a quarterly review, the damage has accumulated. Governance that operates in real time catches and corrects before losses compound (a minimal sketch of what such a check might look like follows this list).
2. Defined accountability structures for AI decisions. For every AI-driven process, a specific individual or team is responsible for its outputs. There are no unowned AI systems.
3. Oversight committees with business, not just IT, representation. EY's data shows that governance succeeds when it reflects the organization's business risk profile, not just its technical architecture. Business unit leaders understand the downstream consequences of AI errors that IT leaders may not.
4. Structured agentic AI policies before deployment, not after. Organizations deploying agentic AI without specific governance frameworks for autonomous decision-making are accepting undefined liability. The 76% planning agentic AI deployment within a year cannot afford to treat governance as a phase-two activity.
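To make the first practice concrete, here is a minimal sketch of what a real-time output check might look like. It is illustrative only: the bias-score threshold, the AIOutputEvent fields, and the escalate_to_owner hook are assumptions made for the sketch, not anything EY or McKinsey prescribe.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputEvent:
    """A single output from a deployed AI system, captured at inference time."""
    system_id: str          # which AI system produced the output
    owner: str              # the named individual or team accountable for it
    output_text: str
    bias_score: float       # 0.0-1.0, from an upstream bias classifier (assumed)
    compliance_flags: list[str] = field(default_factory=list)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Illustrative threshold; a real value would come from the organization's risk framework.
BIAS_THRESHOLD = 0.7

def escalate_to_owner(owner: str, system_id: str, finding: str) -> None:
    """Placeholder escalation hook; a real deployment would page or open a ticket."""
    print(f"[ESCALATION] {system_id} -> {owner}: {finding}")

def monitor_output(event: AIOutputEvent) -> list[str]:
    """Evaluate one output at inference time and escalate any findings immediately,
    rather than letting them surface in a quarterly review."""
    findings = []
    if event.bias_score > BIAS_THRESHOLD:
        findings.append(f"bias score {event.bias_score:.2f} exceeds threshold")
    if event.compliance_flags:
        findings.append(f"compliance flags raised: {', '.join(event.compliance_flags)}")
    for finding in findings:
        # Route to the named owner -- there are no unowned AI systems.
        escalate_to_owner(event.owner, event.system_id, finding)
    return findings
```

The design point is the placement, not the specific checks: the evaluation runs at inference time and routes findings to a named owner immediately, which is what separates real-time governance from retrospective auditing.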
The CIO's Accountability Is Expanding
Deloitte's 2025 Tech Exec Survey of 622 U.S. senior technology leaders captured the scale of this shift. The role of the CIO is no longer primarily about technology delivery. It is about balancing security and efficiency, building enterprise cyber resilience, and driving customer trust through transparency in privacy and data governance.
This is not a soft mandate. It is a financial one.
The $4.4 million average loss EY documented is not a tail risk. It is the central tendency of what happens when governance lags adoption. With AI spending at large enterprises projected to grow from $14 million to $23 million in 2025, a 64% increase, organizations that have not built governance infrastructure commensurate with that investment are not just accepting operational risk. They are building a financial liability that will eventually surface.
Five Governance Controls Every CIO Must Have in Place
Based on EY's findings and McKinsey's board governance research, these are the five non-negotiable governance infrastructure elements for any CIO scaling AI in 2025:
1. A real-time AI monitoring capability. Not just logging, but active detection of bias, hallucination, compliance deviation, and unauthorized access across deployed AI systems.
2. A structured agentic AI policy. Define what decisions autonomous agents can make independently, what requires human escalation, and who owns accountability for each category (see the configuration sketch after this list).
3. Board-level AI reporting. Deliver a dashboard of AI-specific metrics to the board on a regular cadence: ROI by business unit, percentage of AI-enabled processes, override rates, and regulatory alignment status.
4. A board-approved AI risk framework. Not a principles statement, but a structured policy that defines acceptable use, accountability structures, and escalation procedures, reviewed and approved by the board.
5. A cross-functional AI oversight committee. Include representation from legal, finance, operations, and HR, not just IT. The most common AI risks (compliance failures, biased outputs, and sustainability impacts) are business risks first and technology risks second.
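To illustrate control 2, here is a minimal sketch of an agentic AI policy expressed as machine-readable configuration rather than a prose document. The decision categories, dollar ceiling, and owner roles are hypothetical placeholders; an actual policy would flow from the board-approved risk framework described in control 4.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "agent may act without review"
    HUMAN_APPROVAL = "agent must escalate to a human before acting"
    PROHIBITED = "agent may not take this action at all"

@dataclass(frozen=True)
class AgenticPolicyRule:
    decision_category: str   # a class of actions the agent can take (hypothetical names below)
    autonomy: Autonomy
    accountable_owner: str   # named role accountable for outcomes in this category
    max_value_usd: float | None = None  # financial ceiling for the action, if any

# Hypothetical policy table: categories, ceiling, and owners are illustrative only.
AGENTIC_AI_POLICY = [
    AgenticPolicyRule("draft_internal_summary", Autonomy.AUTONOMOUS,
                      accountable_owner="business-unit lead"),
    AgenticPolicyRule("send_customer_communication", Autonomy.HUMAN_APPROVAL,
                      accountable_owner="head of customer operations"),
    AgenticPolicyRule("procurement_purchase", Autonomy.HUMAN_APPROVAL,
                      accountable_owner="CFO delegate", max_value_usd=10_000),
    AgenticPolicyRule("regulatory_or_compliance_filing", Autonomy.PROHIBITED,
                      accountable_owner="chief compliance officer"),
]

def lookup_rule(decision_category: str) -> AgenticPolicyRule:
    """Fail closed: any action without an explicit rule requires human approval
    and defaults to the oversight committee as owner."""
    for rule in AGENTIC_AI_POLICY:
        if rule.decision_category == decision_category:
            return rule
    return AgenticPolicyRule(decision_category, Autonomy.HUMAN_APPROVAL,
                             accountable_owner="AI oversight committee")
```

The key design choice is the fail-closed default: any action category the policy does not explicitly cover requires human approval and has a named owner, so no agent decision is ever unowned.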
What Transparent AI Infrastructure Changes
The enterprises avoiding the $4.4 million loss are not running more conservative AI programs. EY's data makes clear they are running more ambitious ones, with stronger returns. The governance infrastructure that prevents losses is the same infrastructure that enables confident scaling.
An enterprise that can see exactly what its AI systems are doing, in real time, across every function, with clear accountability for every output, is not just a lower-risk enterprise. It is a higher-performing one. It can move faster because it can trust what it is moving with.
That is the governance opportunity most organizations are leaving unclaimed.
Aevah's Enterprise Intelligence OS is built with transparent decision architecture and 100% audit confidence at its core, so leaders can scale AI with the visibility and accountability that separates governance that protects from governance that merely signals.
Want to assess your current AI governance infrastructure against the risks EY documented?