
AI Reliance Reshapes Human Decision-Making, Wharton Researchers Warn Finance Leaders

Wharton researchers warn that AI adoption may erode human judgment skills in finance decision-making

The Ledger Signal | Analysis

Why This Matters

CFOs face a paradox: they're pressured to adopt AI faster while remaining accountable for the decisions those tools influence, risking atrophy of critical human reasoning abilities.


Artificial intelligence tools are fundamentally altering how professionals make decisions, raising questions about the long-term impact on human intuition and reasoning skills, according to new research from Wharton School professors Gideon Nave and Steven Shaw.

The findings, published February 24 in Wharton's "Ripple Effect" podcast series, come as finance departments accelerate AI adoption for everything from forecasting to fraud detection. But the researchers' central concern isn't whether AI works—it's what happens to the humans using it.

"As we increasingly rely on AI tools, we must ask: How does this impact our decision-making processes?" Nave, a Wharton professor specializing in behavioral science, said in the 15-minute discussion with Shaw, who focuses on AI and technology integration.

The timing matters for CFOs navigating a peculiar paradox: their teams are being told to adopt AI faster while simultaneously being held responsible for decisions those tools influence. The researchers' work explores this tension, examining how AI integration changes the fundamental nature of human judgment rather than simply augmenting it.

The conversation arrives amid a broader debate in corporate finance about AI's role. Finance leaders have spent the past year testing AI tools for tasks like cash flow forecasting, variance analysis, and scenario planning. But Nave and Shaw's research suggests the more interesting question isn't whether these tools improve accuracy—it's whether they're quietly rewiring how finance professionals think.

Here's the thing everyone's missing: the impact isn't just about getting better answers. It's about what happens to your ability to generate answers when the AI isn't available. (Think of it like GPS navigation—incredibly useful until your phone dies and you realize you've lost the ability to read a map.)

The researchers' focus on behavioral science is particularly relevant for finance functions, where decisions blend quantitative analysis with judgment calls about risk, timing, and strategic priorities. These are precisely the domains where AI assistance could either enhance human capability or create a dependency that erodes it.

The research doesn't appear to offer simple prescriptions—there's no "use AI this way but not that way" framework. Instead, Nave and Shaw are documenting a shift that's already underway, one that finance leaders are living through in real time as they deploy AI tools across their organizations.

For CFOs, the practical question is immediate: as AI tools become embedded in financial planning, close processes, and decision support systems, how do you maintain the human judgment that boards and investors still expect? The researchers suggest this isn't a problem to be solved but a transformation to be understood and managed.

The podcast is part of Wharton's ongoing "Future of Finance" series, which has examined AI's regulatory challenges, behavioral investing trends, and banking's evolution. That Nave and Shaw chose to focus on human cognition rather than AI capabilities signals where academic researchers see the real frontier—not in what the technology can do, but in what it does to us.

The research raises an uncomfortable possibility for finance leaders: the same AI tools being deployed to improve decision-making might be subtly degrading the reasoning skills that make those decisions trustworthy in the first place. Whether that trade-off is worth it remains an open question, one that every CFO will need to answer for their own organization.

Originally Reported By
UPenn

knowledge.wharton.upenn.edu

Why We Covered This

Finance teams deploying AI for forecasting, variance analysis, and scenario planning need to understand how tool dependency may degrade their underlying analytical capabilities and judgment—critical for maintaining board and investor confidence.

Key Takeaways
As we increasingly rely on AI tools, we must ask: How does this impact our decision-making processes?
The impact isn't just about getting better answers. It's about what happens to your ability to generate answers when the AI isn't available.
As AI tools become embedded in financial planning, close processes, and decision support systems, how do you maintain the human judgment that boards and investors still expect?
Companies: Wharton School
People: Gideon Nave - Professor, Behavioral Science; Steven Shaw - Professor, AI and Technology Integration
Key Dates: Publication: 2026-02-24
Affected Workflows
Forecasting, Month-End Close, Budgeting, Reporting
WRITTEN BY

David Okafor

Treasury and cash management specialist covering working capital optimization.
