Compliance Chiefs Grapple With AI Agent Hype as Castellum CEO Maps Regulatory Reality
The financial services industry's rush to deploy AI agents for anti-money laundering and compliance work is colliding with a sobering question: what do regulators actually want to see before they'll bless these systems?
Peter Piatetsky, cofounder and CEO of Castellum.AI, spent nearly an hour on the Fintech Business Weekly podcast this week walking through that exact problem. His company builds AI agents for AML and KYC workflows, which means he's living in the gap between what the technology can theoretically do and what compliance officers can actually deploy without getting hammered in their next exam.
The conversation, published January 21st, covered the practical mechanics of evaluating agent-based solutions—a topic that's moved from "interesting" to "urgent" for compliance leaders over the past six months. Piatetsky and host Jason Mikula dug into model governance frameworks, examiner-ready controls, and how to translate vague regulatory signals into documentation that survives scrutiny.
Here's the thing everyone's dancing around: regulators haven't published a playbook for AI agents in financial crime compliance. They've published principles, guidance documents, and the occasional enforcement action that hints at expectations. What they haven't done is tell banks and fintechs exactly how to document agent decision-making in a way that satisfies examiners who are themselves still figuring out what questions to ask.
Piatetsky's pitch is essentially that Castellum has reverse-engineered those expectations by working directly with compliance teams. The interview focused heavily on governance—not the sexy part of AI, but the part that determines whether your general counsel lets you turn the thing on. How do you validate an agent's output? How do you audit its reasoning? How do you prove to an examiner that the agent isn't just a black box making consequential decisions about suspicious activity reports?
The broader context here is that "agentic AI" has become the hottest phrase in fintech compliance circles, which is saying something given how many hot phrases cycle through this industry. Unlike previous waves of automation, agents are supposed to handle multi-step workflows with minimal human intervention—not just flagging transactions, but investigating them, pulling supporting documentation, and drafting narratives. That's a fundamentally different risk profile than rules-based systems or even earlier machine learning models.
What makes this interview notable is the focus on implementation reality rather than capability theater. Piatetsky discussed how Castellum's agents actually power AML and KYC workflows today, not in some future demo. The conversation also touched on how compliance programs will likely be reshaped over the coming years as agent technology matures—assuming, of course, that the regulatory framework catches up.
For CFOs watching their compliance budgets, the subtext is clear: AI agents promise efficiency gains in some of the most labor-intensive parts of financial crime compliance. But the path from promise to production involves navigating regulatory uncertainty, building governance frameworks that don't yet have templates, and convincing examiners that your controls are adequate for technology they're still learning about themselves.
The full episode runs 52 minutes and is available on Apple Podcasts and Spotify. For compliance chiefs trying to separate signal from noise in the AI agent space, it's probably worth the listen—if only to hear someone acknowledge that the hard part isn't the technology, it's everything else.