58% to 89%: What Gartner and NVIDIA Survey Data Actually Tell Us About AI Adoption in Finance

Evolution AI
May 8, 2026

Every slide deck about AI in financial services leans on the same two statistics. Gartner says 58% of finance functions have deployed AI. NVIDIA says 89% of financial-services firms credit AI with boosting revenue. One number suggests the industry is still warming up; the other implies the transformation is nearly complete. The gap between them is more instructive than either figure alone.

Two Numbers, Two Very Different Questions

In September 2024, Gartner surveyed 121 finance leaders and found that 58% of finance functions were using AI — up 21 percentage points from 37% the previous year. The framing was cautious: adoption is growing, but data quality and talent gaps remain the top barriers.

Four months later, NVIDIA published its sixth annual State of AI in Financial Services report, surveying more than 800 industry professionals. The headline: 89% of respondents reported that AI increases annual revenue and decreases costs. Active AI usage had jumped to 65%, up from 45% the prior year.

At a glance, one stat says "hesitation" and the other "near-unanimity." Read carefully, though, and they are answering entirely different questions.

Reading the Fine Print

Who Was Asked?

Gartner polled a cross-section of CFOs and finance leaders across industries — people whose remit is the finance function specifically, not the broader enterprise. NVIDIA surveyed self-selected financial-services professionals, a population that skews toward firms already investing in GPU infrastructure and AI tooling. An opt-in survey of AI practitioners will always return warmer numbers than a broad poll of finance leaders who may still be running month-end close on spreadsheets.

What Counts as "AI"?

Gartner's definition casts a wide net. Its top use case, intelligent process automation (adopted by 44% of finance functions), bundles robotic process automation and rules-based workflows alongside machine learning. NVIDIA's framing tilts toward generative AI and GPU-accelerated workloads — 61% of its respondents were using or assessing generative AI, and 42% were exploring agentic AI.

When "AI" can mean anything from an automated reconciliation script to a large language model summarising earnings calls, the headline number moves accordingly.

Self-Report and Incentive Bias

Both surveys rely on self-reported data. Academic research on self-report methodology consistently finds that respondents overestimate usage and outcomes, particularly when the topic carries social desirability — and telling a vendor-sponsored survey that your AI investment is paying off carries plenty. This does not make the data useless, but it means treating either number as ground truth is a mistake.

What "Adoption" Actually Looks Like on the Ground

Strip away the headline percentages and a more nuanced picture emerges. The OECD's analysis of 49 jurisdictions finds that AI in finance remains concentrated in specific functions: risk management, fraud detection, sanctions screening, and back-office efficiency. Generative AI, the OECD notes, "is not widely used" among regulated institutions — most deployments focus on automating internal processes rather than customer-facing innovation.

In other words, the majority of firms counted as "adopted" are running narrow, well-scoped use cases — anomaly detection on transaction data, document triage for compliance teams — not end-to-end autonomous workflows. The distance between "we use AI somewhere" and "AI transformed our P&L" is vast.
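To make "narrow, well-scoped use case" concrete, here is a minimal sketch of the kind of anomaly detection those firms actually run — a robust outlier rule over transaction amounts, using the median absolute deviation so the outlier it hunts for cannot distort the baseline. The data and threshold are illustrative, not drawn from either survey.

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts far from the median, scored against the median
    absolute deviation (MAD) -- robust to the outliers it hunts for."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    # 1.4826 rescales MAD to match the standard deviation for normal data.
    return [i for i, a in enumerate(amounts)
            if abs(a - med) / (1.4826 * mad) > threshold]

# Routine payments with one outsized transfer at index 6.
payments = [120.0, 95.5, 130.0, 110.0, 101.2, 99.9, 25_000.0, 115.0]
print(flag_anomalies(payments))  # → [6]
```

A dozen lines like these count as "AI adoption" in a broad survey definition, which is exactly why the headline percentages need unpacking.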

From Pilot to P&L Impact

The 58-to-89 gap partly reflects the difference between "have deployed something" and "believe it moved revenue." Belief is not measurement. NVIDIA's own data shows that 64% of respondents claim revenue growth exceeding 5%, yet rigorous attribution models that isolate AI's contribution from other variables remain rare in financial services.
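The attribution problem can be shown in a few lines. The sketch below contrasts the naive before/after lift a survey respondent might report with a minimal difference-in-differences estimate that nets out what a comparable desk achieved without the tool. The desks and figures are hypothetical, and the method assumes the two desks would otherwise have grown in parallel.

```python
def naive_lift(before, after):
    """Raw before/after growth -- the number that ends up in surveys."""
    return (after - before) / before

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Subtract the control desk's growth to isolate the treatment effect
    (a minimal difference-in-differences, assuming parallel trends)."""
    return (naive_lift(treated_before, treated_after)
            - naive_lift(control_before, control_after))

# Hypothetical desks: one adopted an AI tool mid-year, one did not.
ai_desk = (100.0, 112.0)       # revenue grew 12%
control_desk = (100.0, 107.0)  # grew 7% with no AI at all

print(f"naive lift:   {naive_lift(*ai_desk):.0%}")                    # 12%
print(f"attributable: {diff_in_diff(*ai_desk, *control_desk):.0%}")   # 5%
```

A respondent looking only at the first number reports a 12% AI-driven gain; the portion plausibly attributable to the tool is less than half that.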

Regulation adds friction. The Federal Reserve's SR 11-7 guidance on model risk management requires banks to validate, document, and govern every quantitative model — including AI systems — before they reach production. That framework exists for good reason, but it means scaling from a successful pilot to an enterprise deployment is slower and more expensive than in less regulated industries. When every model needs independent validation, the gap between "using AI" and "proving AI drives revenue" widens further.

Takeaways for Teams Scoping AI Projects

Benchmark against use-case maturity, not headline adoption rates. Whether your industry is at 58% or 89% tells you nothing about whether your specific workflow is ready for automation.

Demand clarity on what "AI" means in any internal business case. A rules-based automation saving 20 hours a month is valuable. A generative AI pilot burning through tokens with no measurable lift is not. The label matters less than the mechanism.

Build measurement infrastructure before claiming ROI. A/B attribution, cost accounting, and baseline metrics are prerequisites — not afterthoughts. If you cannot measure the impact, you cannot manage it.

Treat survey data as directional, not prescriptive. Your organisation's bottleneck is almost certainly data quality and integration, not model sophistication. Gartner's own respondents named inadequate data quality as their top challenge — and that finding is probably the most reliable number in either survey.
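How much the definition of "AI" alone can move a headline number is easy to demonstrate. The sketch below scores the same five hypothetical firms under a broad definition (any automation, including RPA and rules-based workflows) and a narrow one (machine learning or generative AI only); the firm names and tool labels are invented for illustration.

```python
# Hypothetical responses: each firm lists the automation it runs.
firms = {
    "Firm A": {"rpa_reconciliation"},
    "Firm B": {"rules_based_workflow", "ml_fraud_model"},
    "Firm C": {"genai_summarisation"},
    "Firm D": set(),
    "Firm E": {"rpa_reconciliation", "rules_based_workflow"},
}

NARROW = {"ml_fraud_model", "genai_summarisation"}
BROAD = NARROW | {"rpa_reconciliation", "rules_based_workflow"}

def adoption_rate(firms, counts_as_ai):
    """Share of firms running at least one tool that counts as 'AI'."""
    adopters = sum(1 for tools in firms.values() if tools & counts_as_ai)
    return adopters / len(firms)

print(f"broad definition:  {adoption_rate(firms, BROAD):.0%}")   # 80%
print(f"narrow definition: {adoption_rate(firms, NARROW):.0%}")  # 40%
```

Same firms, same answers, and the "adoption rate" doubles depending on where the survey draws the line — which is the 58-versus-89 story in miniature.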

Interested in fast, accurate data extraction from financial statements without the hassle? Financial Statements AI has everything you need. Sign up here for a free trial.
