Compliance Chiefs Navigate Murky Waters as AI Agents Enter AML Workflows
Financial institutions deploying artificial intelligence agents for anti-money laundering and know-your-customer compliance face a regulatory landscape where the rules of engagement remain largely unwritten, according to Peter Piatetsky, CEO of Castellum.AI.
In a podcast interview published January 21st, Piatetsky outlined the practical challenges compliance leaders confront as they evaluate agent-based solutions—autonomous AI systems that can execute multi-step workflows without human intervention at each stage. The discussion centered on what has become one of the most contentious questions in financial services: how to implement agentic AI in regulated functions while satisfying examiners who lack clear guidance from their own agencies.
"The signals regulators are sending are mixed at best," Piatetsky told Fintech Business Weekly's Jason Mikula. The conversation focused heavily on translating those ambiguous regulatory signals into documentation and controls that can withstand examination—a translation exercise that has become critical as more firms experiment with AI agents in their compliance programs.
Castellum.AI, which Piatetsky cofounded, has built its business around deploying AI agents specifically for AML and KYC workflows. The company's approach reflects a broader industry tension: compliance functions are drowning in manual work and false positives, making them prime candidates for automation, yet they operate in one of the most heavily scrutinized areas of financial regulation.
The model governance question looms particularly large. Traditional AI models require validation frameworks that regulators understand—statistical testing, performance metrics, documented decision logic. Agents, by contrast, can adapt their behavior based on what they encounter, making them harder to validate using conventional methods. Piatetsky discussed how firms should approach this governance challenge, though by his account the industry is still working out answers rather than settling on standards.
The conversation also touched on how compliance programs might evolve as agent technology matures. The promise is significant: AI agents could potentially handle the repetitive investigative work that currently consumes thousands of analyst hours, allowing human compliance officers to focus on genuinely suspicious activity. The risk is equally significant: a compliance failure involving an AI agent could trigger regulatory action that chills adoption across the industry.
For CFOs weighing whether to greenlight AI agent pilots in their compliance departments, Piatetsky's interview suggests the decision framework should prioritize documentation and explainability over speed to deployment. The technology may be ready, but the regulatory infrastructure around it remains under construction—and compliance leaders are effectively building that infrastructure case by case, examination by examination.
The timing of the discussion is notable. As of early 2026, financial institutions are moving beyond proof-of-concept AI projects and into production deployments, particularly in back-office functions where the cost pressures are most acute. Compliance, with its combination of high costs and regulatory sensitivity, sits at the center of that tension.