
 

GRANT THORNTON

 

 

2026 AI Impact Survey Report

 

The AI proof gap: Why AI isn’t delivering the performance leaders expected

 

Most organizations are scaling AI they cannot explain, measure or defend. Our survey of 950 C-suite and senior business leaders reveals why — and what the organizations pulling ahead are doing differently.

 
 
78%

lack strong confidence they could pass an independent AI governance audit in 90 days

4x

Organizations with fully integrated AI are nearly four times more likely to report AI-driven revenue growth than those still piloting (58% versus 15%)

46%

cite governance and compliance failures as a leading cause of AI underperformance

 
 

INTRODUCTION

 

 

Executives are scaling AI.

They are not governing it.

 
 

Seventy-eight percent of business executives in Grant Thornton's 2026 AI Impact Survey lack strong confidence that they could pass an independent AI governance audit within 90 days. Organizations that are deploying AI can't show how decisions are made or who is accountable for the outcomes. This is the AI proof gap.

 

The disconnect has a price. Organizations with fully integrated AI are nearly four times more likely to report revenue growth than those still piloting — 58% versus 15%. The difference is not just technology. It is accountability. The leading organizations can show how their AI makes decisions, who owns the outcomes, and what happens when something goes wrong. For most, a lack of C-suite alignment is slowing progress and escalating risks. Inside many organizations, COOs overseeing AI-affected operations are discovering governance gaps that CFOs are not funding and that CIOs and CTOs are not surfacing. The result is AI scaling without anyone accountable for what it produces.

 

This report examines four dimensions of the proof gap — governance, strategy, workforce readiness and agentic AI risk — and the measurable performance gap between organizations closing it and those falling further behind. Together, these insights show where real value is being created and how leaders can capture it. The organizations pulling ahead have built governance that gives their leaders the confidence to scale AI decisively. The rest are inheriting risks they cannot see and outcomes they cannot prove. What follows shows where the gap is widening, how it is affecting performance and what it takes to close it.

 
“AI deployment has outpaced the infrastructure to defend it. Leaders who have invested in governance aren't moving slower — they are moving faster, because they have the confidence to scale. The ones who haven't built it yet are one incident away from a much harder conversation.”
Tom Puthiyamadam
Managing Partner, Advisory Services
Grant Thornton Advisors LLC
 
 

GOVERNANCE

 

 

Management is moving fast.

Oversight hasn't caught up.

 
%

of executives identify governance as the function most needing focus to meet their AI ambitions — even though 46% cite governance failures as a leading cause of underperformance.

%

of boards have approved major AI investments — yet 48% have not set AI governance expectations and 46% have not integrated AI risk into ongoing oversight.

 
 

Boards are giving AI the green light, but many are not asking what happens if something goes wrong. Three in four boards have approved major AI investments, but fewer than half have set governance expectations, and fewer than half have made AI risk a standing agenda item for board or committee oversight.

 

Most governance models were not built for the volume of AI use cases organizations are now deploying. Centralized review bodies get overwhelmed as use cases multiply, creating bottlenecks that slow the business without actually reducing risk. Organizations that develop stronger governance adopt AI faster. Among organizations still piloting AI, only 7% are very confident they could pass an independent AI governance audit in 90 days, compared with 74% of those with fully integrated AI.

 

Organizations are moving through discovery and deployment unable to show that AI is working safely, defensibly and at the scale the business requires. Each ungoverned initiative does not just create one gap. It creates a gap that makes the next initiative harder to govern, harder to measure, and harder to defend. The proof gap does not grow linearly. It compounds.

The proof gap is real and it is measurable. The question is what separates the organizations that can prove their AI works from those that cannot. The answer, revealed consistently across the survey data, is governance. Not governance as most organizations practice it. Governance built as a performance system.

 

Governance and growth metrics rise with integration stage

 

 

Source: Grant Thornton’s 2026 AI Impact Survey, n = 950

 

Note: None of the 28 “early AI exploration” stage respondents were “very confident” they could pass an independent AI governance audit. Proof at the earliest stages is not low. It is nonexistent. Organizations do not drift into governance confidence. They build it deliberately. The gap between piloting and fully integrated is tenfold.

 
 

Without strong governance, piloting and scaling produce activity — not outcomes. Every gap compounds the next.

 
 
 

STRATEGY

 

 

Strategy drives AI ROI.

Three in four haven’t built one.

 
%

say strategy is the biggest driver of ROI.

78%

of operations leaders do not have a fully developed and implemented AI strategy.

 
 

Organizations are succeeding on breadth, with more pilots, more use cases and more functions touched by AI, but they are failing on depth. In our survey, business leaders identified competitor moves as the biggest external pressure driving adoption. Many are motivated by the fear of falling behind rather than a clear, practical view of where AI creates value for their specific business model.

 

Closing the gap requires discipline, not just vision.

 

Building measurement targets and governance infrastructure enables teams to move faster with confidence. That means consistent ROI measurement across initiatives, feedback loops that inform where the next investment should go, and the courage to exit experiments that are not delivering. It also means starting where the evidence is easiest to build.

 

 

 
1 in 2

operations leaders say they need formalized AI strategy or governance to improve in the next six months. The organizations that move now are already pulling away. Planning to build a strategy is not the same as building one.

 
 

When AI strategy doesn’t connect vision to outcomes, the gap emerges in the distance between them.

 
 
 

WORKFORCE

 

 

AI is speeding ahead.

The workforce isn’t ready.

 
%

of CIOs/CTOs say the workforce is fully ready to adopt AI

%

of COOs say the workforce is fully ready to adopt AI

 
 

AI performance isn't matching aspirations because workforce readiness is lagging. Training is disconnected from workflows and tasks, leaving employees without the role-specific guidance they need. A leadership misalignment is partly to blame. CIOs and CTOs are five times more likely than COOs to say the workforce is ready to adopt AI. That gap reflects two different views of the same organization, and the distance between those two perspectives leads to a disconnect on moving the workforce forward toward organizational goals and optimizing AI performance.

 

The problem starts at the top.

 

AI performs when leaders redesign how work actually happens — operating models, workflows, performance management. Awareness training moves people to the starting line. Most organizations stop there. Training is the most underfunded AI investment area in the survey, with 34% of finance leaders saying it isn't getting enough funding. The organizations pulling ahead are equipping their workforce to actively use AI tools within specific, defined workflows with clear rules about usage and accountability.

 

Organizations are pouring money into AI while underfunding the people side: change management, training and process redesign. They are treating AI like an IT project. The result is a workforce watching AI arrive without knowing what to do with it. Only 6% of executives say change leadership and workforce enablement is a top skill essential for thriving in an AI-driven environment. That number is the problem. The survey reflects how far most organizations are from providing employees with the role-oriented, process-specific AI application guidance they need. Frontline employees (37%) and middle managers (30%), the people closest to AI in daily operations, were identified as needing the most support to implement AI.

 

Data and systems also significantly contribute to AI value leakage. Insufficient data readiness is the third-leading cause of AI underperformance, and 55% of CIOs/CTOs report that fewer than half their core applications are AI-ready. At the same time, just 40% of organizations are well-prepared to handle the privacy and security challenges AI creates. These infrastructure limitations compound when technology investments are disconnected from workforce needs and from specific workflow challenges.

AI training must reinforce redesigned work, not just tool use. It requires changing habits, decision-making and workflows at the role level so AI becomes part of how work gets done, not an add-on to how it was done before.

 

Leaders need to identify the right problems, scope them precisely, and apply AI with discipline within each specific domain, as they would with any prior adoption of enterprise technology — beginning with data readiness and careful systems design, supported by comprehensive change management and clear accountability.

 
 

AI is a change management initiative, not an IT project. The gap grows if you don’t prepare the workforce for change.

 
 
 

RISK

 

 

Agentic AI is accelerating.

Most aren’t prepared for its failure.

 
%

of business leaders are giving agentic AI access to their data and processes

1 in 5

have a practiced response plan for AI failures

 
 

Nearly three in four organizations are giving agentic AI access to their data and processes — piloting, scaling or running it in production. Just 20% have a tested AI incident response plan for when it fails. The few organizations that have built governance into how AI operates are able to scale with confidence. Others remain limited in how far they can apply it.

 

Most organizations are not yet permitting fully autonomous decision-making: Only 5% allow agents to execute high-stakes decisions without human review, and 60% limit agents to moderate-risk task automation. But even at those levels, governance infrastructure has not kept pace, and C-suite misalignment is a contributing factor.

 

More than half (54%) of COOs are concerned about regulatory and compliance uncertainty related to agentic AI, compared with just 20% of CIOs/CTOs. That gap in concern is itself a risk. When the people deploying the technology aren't worried about what the people running operations are worried about, control breaks down.

 

Tested AI incident response protocols are a critical governance tool.

 

The harder shift is structural. Governance needs to move from static policy to continuous oversight: monitoring agent behavior, detecting deviations and adjusting controls as systems evolve. Organizations that build that capability now will scale agentic AI without increasing their exposure.

The question is no longer whether your organization will experience an agentic AI failure. It is whether you will be able to explain it when you do. Most cannot — yet. The infrastructure already exists: nearly every organization has built these capabilities for cybersecurity. The elements translate directly to AI.

 
 

If autonomy outpaces scrutiny, AI agents can turn the gap into a chasm.

 
 
 

PERFORMANCE

 

 

Governance delivers performance.

The leaders reap the benefits.

 
 

The data is clear. Organizations that built governance first, prepared their workforce before demanding ROI, and had the discipline to stop what wasn't working are outperforming their peers across every measure.

 

Measurable benefits by integration stage

 

 

Source: Grant Thornton’s 2026 AI Impact Survey, n = 950

 

Note: These are not different types of organizations. They are the same organizations at different stages of the same journey — and the difference in outcomes is the cost of the proof gap. The leaders did not get there by accident. They built the infrastructure first.

 
 

Governance built early enables the confidence to scale. Every week it is deferred, the gap widens.

 
 
 

CLOSING THE GAP

 

 

Defensible AI delivers results

— but it takes discipline to build

 
 

The proof gap is an accountability problem. Boards approved investments without setting governance expectations. Leadership deployed AI without defining who owns the outcomes. Organizations scaled without building the infrastructure to prove any of it works.

 

The organizations closing the proof gap are not waiting for better technology, a regulatory mandate or an incident to force the issue. They are building governance now, and the gap between them and the rest is already measurable in revenue, efficiency, and innovation. It will not close on its own.

 

Key steps
  1. Build governance as a performance system

    Organizations with fully integrated AI are 10 times more likely to pass an independent governance audit — and almost four times more likely to report revenue growth, according to our survey. Every week governance is deferred, the gap widens.

  2. Close the C-suite alignment gap before it closes you

    CIOs, COOs and CFOs see the organization differently. Until leadership shares a common definition of AI readiness, accountability and risk management, workforce investment will underperform and agentic deployments will scale without the controls to contain them.

  3. Measure what is working — and exit what is not

    Only 22% of operations leaders are working with a fully developed and implemented AI strategy. The organizations pulling ahead are not scaling more pilots — they are scaling fewer, with better measurement and clearer exit criteria. Depth creates the outcomes that justify the next investment.

 

Are you in the AI proof gap?

 

5 questions every executive must answer

 

 

AI proof gap self-assessment questions

  1. Do your leaders share a common definition of AI success, risk and accountability?
  2. Can you consistently measure ROI across your AI initiatives and identify which ones should scale or stop?
  3. Have you defined where AI should act autonomously, where human oversight is required, and who is responsible for outcomes?
  4. Could you produce auditable evidence of how your AI systems make decisions today?
  5. If an AI system failed tomorrow, do you have a tested response plan and the ability to trace what went wrong?
 

If you answered “no” to any of these questions, you are in the AI proof gap.

 

Our report shows what organizations closing the gap are doing differently.

 
 

Download the full report

 

Get a deeper analysis of the AI proof gap and how leading organizations have bridged it

 
 
 

Methodology

 

Between February 23 and March 18, 2026, Grant Thornton conducted a survey of 950 business leaders across 10 industries. Respondents were drawn from senior leadership, including C-suite executives (CEOs, CFOs, COOs, CIOs/CTOs) and leaders reporting directly to the C-suite.

 

Respondents came from asset management (N=100), banking (N=50), construction/real estate (N=100), energy (N=100), insurance (N=100), manufacturing (N=100), media and entertainment (N=100), private equity fund leadership (N=100), services (N=100) and technology and telecommunications (N=100).

 

Functional representation came from operations (390), finance (313), IT (234) and CEO/managing partner (13).