AI Tutors Create "Cookie Jar Problem" for Corporate Training Programs, Wharton Finds
Finance leaders investing in AI-powered learning tools face an uncomfortable paradox: the technology designed to accelerate employee skill development may actually be undermining it, according to new research from Wharton that raises questions about how companies deploy generative AI in training environments.
The issue isn't the technology itself—it's human nature. Wharton professor Hamsa Bastani describes the problem as a "cookie jar" dynamic: employees know that over-relying on AI assistance harms their long-term learning, but self-regulation proves difficult when help is always one click away. The research, published February 24, shows that on-demand AI assistance can erode the practice and "productive struggle" necessary for genuine skill acquisition, even when learners understand the trade-off.
For CFOs overseeing training budgets and workforce development initiatives, the findings complicate what seemed like a straightforward efficiency play. Many finance organizations have rolled out AI tutors and coding assistants to accelerate technical upskilling, particularly as teams grapple with new data analytics requirements and automation projects. The implicit assumption: more support equals faster learning. Bastani's research suggests the opposite may be true.
The mechanism is deceptively simple. When AI assistance is freely available, learners take the path of least resistance—asking the AI to solve problems rather than wrestling with them independently. This pattern short-circuits the cognitive effort that actually builds competence. The learner completes more exercises and feels productive in the moment, but retains less and struggles when the AI crutch disappears.
What makes this particularly thorny for corporate training programs is that employees aren't acting irrationally. They face real time pressures and competing priorities. An analyst trying to learn Python while closing the quarter will rationally choose the AI shortcut that gets the immediate task done, even if it sacrifices long-term skill development. The individual incentive structure undermines the organizational goal.
The research arrives as finance departments dramatically increase spending on AI-enabled learning platforms. The technology promises to personalize instruction, provide instant feedback, and scale expertise across large organizations—all attractive propositions for functions trying to modernize technical capabilities without proportional headcount growth. But Bastani's findings suggest that deployment strategy matters as much as the technology itself.
The "productive struggle" that AI assistance can eliminate isn't a bug in the learning process—it's a feature. The difficulty of working through a problem independently, making mistakes, and iterating toward a solution is precisely what builds durable skills. Remove that friction entirely, and you risk creating employees who can operate with AI support but lack fundamental competence without it.
For finance leaders, this creates a design challenge. The question isn't whether to use AI in training, but how to structure access in ways that preserve the learning mechanisms that actually work. That might mean time-delayed assistance, limited daily queries, or AI tools that guide rather than solve—interventions that feel counterintuitive when the technology can provide instant, perfect answers.
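To make the idea concrete, the two mechanisms the article mentions (a minimum "struggle" period before help unlocks, plus a daily query cap) could be sketched as a thin gate in front of an AI tutor. Everything below is illustrative: the class name, thresholds, and structure are assumptions, not a description of any real product or of the Wharton study's design.

```python
import time

class GatedAssistant:
    """Illustrative sketch of structured AI access: a learner must work
    on a problem for a minimum delay before AI help unlocks, and gets a
    limited number of AI queries per day. Names and numbers are
    hypothetical, chosen only to show the shape of the policy."""

    def __init__(self, daily_limit=5, min_struggle_seconds=600, clock=time.time):
        self.daily_limit = daily_limit          # max AI queries per day
        self.min_struggle = min_struggle_seconds  # required independent effort
        self.clock = clock                      # injectable clock for testing
        self.queries_used = 0
        self.problem_started_at = None

    def start_problem(self):
        # Learner begins working; the "productive struggle" timer starts.
        self.problem_started_at = self.clock()

    def request_help(self):
        """Return True if an AI query is allowed right now, else False."""
        if self.queries_used >= self.daily_limit:
            return False          # daily cap reached
        if self.problem_started_at is None:
            return False          # no problem in progress
        elapsed = self.clock() - self.problem_started_at
        if elapsed < self.min_struggle:
            return False          # hasn't struggled long enough yet
        self.queries_used += 1
        return True
```

The point of the sketch is only that these policies are a few lines of gating logic, not a technology problem; the hard part, as the research suggests, is choosing thresholds that preserve learning without frustrating employees into abandoning the tool.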
The broader implication extends beyond training programs to how finance functions think about AI augmentation generally. If employees become dependent on AI assistance for routine tasks, what happens to baseline competency? When the system goes down or produces an error, can the team still function? These aren't hypothetical concerns—they're workforce planning questions that belong in the same conversation as automation ROI.
Bastani's cookie jar metaphor captures the essential problem: knowing something is bad for you doesn't make it easier to resist when it's readily available. Finance leaders building AI-enabled training programs will need to account for that human reality, not just the technological capability.