Why 74% of Enterprises Are Getting No Value From AI, and the Architecture Problem Nobody Is Talking About
The AI Profitability Problem
Three years into the generative AI era, most C-suite leaders share a version of the same private frustration. The investment is real. The announcements were made. The pilots ran. And the financial results are underwhelming.
The data confirms that this is not an isolated experience.
BCG's research found that 74% of companies have yet to show any tangible value from AI despite widespread investment. Only 4% have achieved what BCG classifies as cutting-edge AI capabilities across the enterprise. McKinsey's 2025 State of AI survey, which gathered responses from 1,993 participants across 105 countries, puts the same finding in sharper relief: 88% of organizations use AI in at least one business function, but only 39% report any EBIT impact at the enterprise level. Of those, most see less than 5% improvement.
These two research bodies are measuring different things but arriving at the same conclusion. There is a widening gap between AI activity and AI value, and it is structural, not motivational.
Understanding that distinction is the difference between doubling down on the wrong strategy and making the one change that moves the needle.
The False Comfort of "We Have AI Tools"
The standard enterprise AI narrative in most boardrooms runs something like this: "We've deployed AI across multiple functions, we have active pilots, our teams are trained, and we're seeing efficiency gains in targeted areas."
All of that can be true. And the enterprise can still be in the 74%.
Here's why: use-case-level efficiency and enterprise-level financial impact are two fundamentally different measurements. McKinsey's data makes this distinction explicit: 64% of organizations say AI is enabling their innovation at the use-case level, but that use-case value is not translating upward. Only 39% see any EBIT movement.
The gap between those two numbers, 64% and 39%, is not a measurement lag. It is the architecture gap.
The Architecture Gap: What Is Actually Missing
When AI is deployed by function, marketing buys one tool, operations buys another, and finance uses a third. Each creates isolated value within its own lane. The marketing team saves time on content generation. The operations team improves forecasting accuracy. Finance automates reporting.
None of these gains connect. They don't inform each other's decisions, share data models, or produce insights that cross organizational boundaries. Leadership still makes strategic decisions based on fragmented information, some AI-assisted and some not, with no unified intelligence layer to synthesize it.
This is the architecture problem that explains both BCG's 74% and McKinsey's 39%.
A Harvard Business Review analysis of enterprise AI deployments documented the downstream consequence: organizations running separate AI models by department reach contradictory conclusions about the same business reality. In one documented case, a risk team flagged customers as too high-risk at the same time a marketing team targeted those identical customers for growth, because each team's AI was operating on separate data with no shared intelligence layer to identify the conflict.
That is not an AI problem. It is an orchestration problem.
The Data Convergence: What BCG and McKinsey Are Really Telling Us
BCG and McKinsey approach AI performance measurement from different angles, but their findings converge in a way that every C-suite leader should understand.
McKinsey focuses on operational maturity: how widely AI is deployed, at what stage of scaling, and what functional impact is being captured. Their finding that 88% of organizations use AI but only 39% see EBIT impact is a measurement of the adoption-to-outcome conversion rate.
BCG focuses on strategic value, specifically who is capturing the gains and at what scale. Their finding that 74% have yet to generate tangible value, with only 4% achieving cutting-edge enterprise-wide capability, describes the distribution of AI returns: a winner-take-most dynamic where the top tier pulls ahead while the majority circles the same ground.
Read together, the picture is clear. Broad, function-level AI adoption is producing use-case gains for most organizations, but enterprise-level financial returns are concentrating in a small group of companies that have made a different structural choice.
What separates them is not the sophistication of their AI models. It is the presence or absence of an intelligence architecture that connects those models to strategy, to leadership decision-making, and to business outcomes.
The Abandonment Signal Nobody Is Discussing
S&P Global's 2025 analysis introduced a data point that deserves direct attention from every CIO and CEO allocating AI budget: the share of companies abandoning most of their AI projects jumped to 42% in 2025, up from just 17% the prior year.
This acceleration is significant. It indicates that the initial wave of AI enthusiasm, funded by innovation budgets and fueled by competitive pressure, is hitting a wall of unmet expectations. Organizations that cannot demonstrate measurable value are withdrawing rather than doubling down.
The underlying drivers, per multiple research reports, are consistent: cost escalation, unclear value measurement, and AI initiatives disconnected from core business objectives. These are symptoms of the same architecture gap: not the absence of AI capability, but the absence of the structure needed to translate capability into outcomes.
The CFO who cuts an AI program that showed no return is not making the wrong call. The problem is that the program was structured in a way that made return nearly impossible to demonstrate, because it was never connected to a measurable enterprise outcome in the first place.
What the 4% Are Doing Differently
BCG's research on the small group of companies achieving cutting-edge, enterprise-wide AI impact identifies a consistent set of differentiators. They are not running more pilots. They are not spending more on models. They are doing something structurally different.
- They set growth and innovation objectives for AI, not just efficiency. McKinsey's data shows that 80% of organizations set efficiency as the primary objective of their AI programs. High performers set growth or innovation as the primary objective. This shifts what gets built, how it gets measured, and what counts as success.
- They redesign workflows, not just tasks. McKinsey tested 25 attributes for correlation with enterprise-level AI financial impact. Workflow redesign had the single strongest effect. High performers are three times more likely to have fundamentally redesigned core workflows as part of their AI investment, rather than simply adding AI on top of existing processes.
- They establish business-IT co-responsibility. BCG's research identifies this as a defining characteristic of AI leaders. Business unit owners and technology leaders share accountability for AI outcomes, which means AI programs are anchored to business results, not IT delivery milestones.
- They build modular, reusable architecture. Rather than deploying bespoke tools per function, leading enterprises build AI capabilities designed for reuse across the organization. This is the architectural choice that allows isolated gains to compound into enterprise impact.
- They have a multi-year, CEO-sponsored vision. BCG identifies CEO-level sponsorship, not just endorsement, as a critical accelerator. This means the CEO has a working understanding of where AI fits into the company's strategy and is actively involved in evaluating whether it is delivering against that strategy.
The Four Decisions Every C-Suite Must Make
Moving from the 74% to the 4% is not a technology decision. It is a leadership decision, made across four dimensions.
1. What are we measuring? Define enterprise-level AI success in financial terms before deploying. Use-case efficiency is a leading indicator, not the destination.

2. Who owns the outcome? Establish cross-functional ownership of AI returns, with business leaders and technology leaders jointly accountable, not IT alone. Deloitte's 2025 research found that when the CTO, CFO, and Chief Strategy Officer jointly own technology investment decisions, organizations are significantly more likely to see above-average EBITDA.

3. How are our AI investments connected? Map your current AI deployments against your strategic decisions. If your AI tools are not informing each other, they are not informing your strategy.

4. Where is the intelligence layer? Identify where in your architecture the synthesis happens, where your AI outputs become connected insights, accessible to leadership and trackable against business outcomes. If you cannot answer this, you have the architecture gap.
What Connected Enterprise Intelligence Changes
The enterprises generating real financial returns from AI are not running better point solutions. They have built, or deployed, the orchestration layer that connects their AI investments to each other, to their data, and to the decisions their leadership makes every day.
This is what shifts AI from a collection of functional experiments into a strategic asset. It is what allows leadership to ask a question about customer risk, market opportunity, or operational performance and receive an answer synthesized across every AI-enabled system in the enterprise, not fragmented across siloed dashboards.
The 74% statistic will not improve by deploying more AI tools. It improves when enterprises build the intelligence infrastructure that makes the tools they already have actually work together.
Aevah is the Enterprise Intelligence OS that connects your people, processes, and platforms, awakening the organizational intelligence already inside your systems and delivering the strategic clarity to act on it.
If you'd like to understand where your organization sits on the AI maturity curve and what it would take to cross into the top tier of enterprise AI performance, request a 20-minute AI Maturity Assessment.
