
In AI We Trust – Parts 1 to 9

Contents
Part 1: The Mission Meets Its Adversary
Part 2: Trust Isn’t a Capability – It’s a Decision
Part 3: The Risk of Looking Modern While Acting Manual
Part 4: The Loop of Least Resistance
Part 5: When AI Sounds Right but Gets It Wrong
Part 6: Rethinking Roles in an AI-First Compliance Function
Part 7: Building Trust, Not Just Tools
Part 8: Regulators as Intelligent Orchestrators: Evolving from Rules to Trust
Part 9: Slack Tide: The Cost of Waiting
Introduction
This series is a reflection on the intersection of artificial intelligence, trust, and human responsibility in financial crime compliance.
Each part explores a different dimension of how AI is presenting new threats and reshaping our governance models, our workflows and, most critically, our expectations of human judgment.
This isn’t intended to be AI hype – it’s a provocation. A call to rethink our purpose, the systems we’ve built, and the assumptions we’ve inherited. It’s about what happens when financial crime evolves and compliance doesn’t.
Meanwhile, criminals are scaling AI-powered fraud, deepfakes, and synthetic networks in production. They aren’t limited by governance, policy or budget. They’re limited only by imagination.
The uncomfortable truths?
→ Much of AI in compliance remains stuck in proof of concept, never deployed into production.
→ Risk appetite and governance processes delay decisions and deployment rather than enabling progress.
→ Risk exposure and fraud losses at regulated financial institutions are only increasing as new criminal tactics circumvent legacy controls.
But most compliance functions?
Still reliant on humans doing what machines now do better.
Still built to spot what criminals did—not what they’re about to do.
The result?
→ Human effort is poured into low-value tasks.
→ Systems are busy—but not intelligent.
→ We’re burning investment that could be redeployed to fight financial crime more effectively.
We’ve created the illusion of progress.
The reality is that we’ve scaled inefficiency and called it resilience.
This paper calls time on that.
It challenges leaders to stop designing around inefficiency and distrust of AI, and to start deploying solutions and building systems that can go head to head with what criminals are deploying.
Because the future of compliance isn’t just about cost reduction or replacing people. It’s about elevation: rethinking how we work when machines can do more, driving better risk-management outcomes and, ultimately, trust.
And it makes the case for a new FCC model:
Where human judgment is elevated.
Where risk isn’t just monitored—it’s anticipated.
And where trust is not a byproduct, but the starting point and the backbone.
If you’re part of the C-suite or a leader in FCC, risk, or governance, this isn’t just about technology. It’s about credibility, control, and whether you’re building systems that are ready for the real world and your mission.
The threat is real. AI and technology are ready. The question is – are you?
This is In AI We Trust? Rethinking Compliance, Judgment, and Direction in the Machine Age.
Part 1: “The Mission Meets Its Adversary”
Financial crime is evolving.
And it’s evolving faster than most compliance functions are prepared for.
While many firms are still defining their AI strategies, criminal networks are already operationalising theirs.
We’re not just talking about fraud rings. We’re talking about adversaries using large language models, synthetic media and reinforcement learning to bypass detection in real-time.
AI isn’t just a compliance opportunity. It’s now part of the threat model.
Compliance must evolve into active defence, embedding proactive deterrence to predict and disrupt threats, rather than merely responding to them.
Here’s what that looks like in practice:
Synthetic identities and deepfakes
Generative AI is being used to fabricate entire digital personas – names, documents, even full social media histories. Voice clones and video deepfakes are already being deployed to bypass controls.
Most systems can’t spot them. Static checks and document scans don’t pick up on AI-generated nuance.
AI-coordinated money mule networks
Criminals are using AI to recruit, manage, and coordinate global money mule networks.
Bots handle communication. Instructions are automated. Transactions are synchronised across accounts and jurisdictions.
Legacy transaction monitoring systems miss the signs. They weren’t designed to detect behavioural coordination – especially when it evolves in real-time across geographies.
Fraud-as-a-service with AI toolkits
On the dark web, plug-and-play platforms offer automated phishing, fake onboarding, credit card testing, and identity spoofing – all powered by AI.
Compliance tools don’t simulate these attacks. They react – often slowly.
Here’s the uncomfortable truth:
Most FCC systems are built to look backward. But AI-powered adversaries are looking forward – constantly adapting, iterating, personalising. They don’t operate under budget constraints or cycles. They don’t care about policy or governance. They care about what works. And right now, it’s working.
So let’s remember why Financial Crime Compliance exists in the first place:
It’s not just about control frameworks or audit readiness. It’s about human protection. It’s about market integrity. It’s about stopping harm before it happens.
And that mission is now facing its most adaptive, intelligent adversary yet.
Next Up: Part 2: “Trust Isn’t a Capability – It’s a Decision”