September 9, 2025
Sanctions compliance moves fast. Adversaries move even faster. Over the past eighteen months, artificial intelligence has shifted from an "interesting" development to an operational tool in sanctions evasion, especially where proliferation financing (PF), maritime obfuscation, and cross-border procurement intersect.
The Financial Action Task Force (FATF) recently published a report on Complex Proliferation Financing and Sanctions Evasion Schemes, which offers a crucial baseline. During the Q&A of the related FATF webinar, panelists noted that many of the questions received concerned AI. One panelist observed that AI "has begun to play a role" in organized sanctions evasion while emphasizing that the available evidence is limited and that it is "difficult to draw a conclusion." The panelist pointed to potential uses in automatic obfuscation and, within the cryptocurrency and blockchain ecosystem, in enhancing mixing and anonymization, which is one reason FATF views AI as an emerging area for close monitoring. That acknowledgement matters, but it also highlights the gap: as Aaron Arnold argues, describing AI as merely "emerging" risks understating how quickly illicit actors are already deploying it.
Sanctions evasion has always relied on deception: front companies, false paperwork, circular routing, and opportunistic intermediaries. AI is a force multiplier for all of it. It does not invent new crimes so much as industrialize old ones - making identity fraud more convincing, documentation more consistent, network behavior more adaptive, and specialized knowledge and expertise faster to disseminate.
Identity and workforce deception are easier to scale. North Korean IT operatives have reportedly used AI face-swap techniques in remote interviews to obtain contract work under assumed identities. That means fresh income streams as well as access, and both are directly relevant to sanctions implementation and export control risk.
"Synthetic" persons and entities now look and feel real. Generative tools can assemble credible dossiers, from IDs, bios, corporate websites, and filings, all with coherent digital exhaust. At volume, these clusters overwhelm KYC / CDD and frustrate link analysis. Each persons carries a distinct, plausible, digital footprint and correlation checks often fail.
Documentation and routing can be optimized, not just faked. Large language models (LLMs) can be aimed at tariff schedules, licensing rules, and trade corridors to find low-friction pathways for misclassification and rerouting of sensitive goods, and then generate internally consistent shipping paperwork designed to survive review.
Add to that the maritime domain, where long-standing tactics (AIS spoofing, identity tampering, ship-to-ship transfers) are being paired with more convincing narratives of movement and intent. The result is not a brand-new typology but a far more convincing version of the old one - "ghost ships", simulated fleet behavior, and masked port calls tied to sanctioned programs - that is harder to identify and interdict.
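Defensively, some of this residue is still visible to simple physics. Here is a minimal sketch of two classic AIS plausibility checks - the thresholds, track format, and vessel data are assumptions for illustration, not validated parameters. It flags long "dark" gaps in reporting and jumps between consecutive position reports that would require impossible speeds.

```python
# Minimal sketch of two classic AIS plausibility checks (hypothetical
# thresholds and message format). Flags dark gaps (long reporting silences)
# and physically impossible jumps between consecutive position reports,
# both common residue of AIS spoofing and identity tampering.
from math import radians, sin, cos, asin, sqrt

MAX_SPEED_KN = 30.0    # assumed plausible speed ceiling for a cargo vessel
MAX_GAP_HOURS = 6.0    # assumed reporting-silence threshold

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two positions, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 3440.065  # Earth radius in nautical miles

def flag_ais_anomalies(track):
    """track: time-sorted list of (hours_since_epoch, lat, lon) reports."""
    alerts = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt >= MAX_GAP_HOURS:
            alerts.append(f"dark gap: {dt:.1f} h of silence after t={t0}")
        dist = haversine_nm(la0, lo0, la1, lo1)
        if dt > 0 and dist / dt > MAX_SPEED_KN:
            alerts.append(f"impossible jump: {dist:.0f} nm in {dt:.1f} h after t={t0}")
    return alerts

# A vessel that goes silent for eight hours and reappears hundreds of
# nautical miles away trips both checks.
track = [(0.0, 1.20, 103.85), (1.0, 1.30, 103.60), (9.0, 5.50, 100.30)]
print(flag_ais_anomalies(track))
```

Real deployments layer heuristics like these under fleet-level and port-call analytics, but even this pair catches the crudest gap-and-reappear patterns.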
FATF is right: evidence lags fast-moving threats. But policy and supervision often wait for "enough" data while adversaries exploit that window of opportunity. We have seen this pattern before, from early crypto thefts to deceptive shipping practices, where criminal capability outpaced supervisory capacity until oversight caught up. In PF specifically, effectiveness scores remain uneven globally. If authorities and institutions treat AI as tomorrow's topic, they risk calibrating controls to yesterday's scheme variants while today's attacks continue to scale with limited visibility.
There is also a structural asymmetry. Adversaries do not face procurement cycles, legacy system constraints, or privacy-law hesitations. Many competent authorities and financial institutions do. We respect governance and rights, but that also means we must design counter-AI controls that work within those constraints - and do so quickly.
This is not a "how-to" for criminals: it is a call to act with urgency. Three questions every competent authority and institution should soon be able to answer "yes" to:
Have we named the risk and trained against it? Put AI-enabled evasion into examiner curricula, customs and export control training, FIU analytics, and supervisory outreach. Train staff to recognize deepfake-assisted onboarding, synthetic corporate families, AI-aided trade misclassification, and maritime identity tampering. Use controlled test artifacts and scenarios - for example, ethically generated deepfake videos and synthetic documents - to train and evaluate staff detection skills without enabling misuse.
Have we updated our PF Risk Assessments and sector guidance? Both should explicitly address AI-amplified behaviors. Do not wait for perfect case studies: document plausible, evidence-based scenarios, align indicators, and set the expectation that supervised entities will demonstrate control changes, not merely awareness.
Are our detection capabilities fit for purpose and well governed? Modernize detection with guardrails: blend rules with behavioral and graph analytics for sanctions screening, beneficial ownership resolution, vessel network analysis, and trade-based risk (a minimal sketch of layered ownership resolution follows this list). Prioritize model governance and testing against adversarial manipulation. Where possible, expand public-private information sharing to include AI-specific artifacts such as deepfake signatures, clusters of synthetic IDs, and AIS anomaly patterns.
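As one illustration of the beneficial-ownership point above, here is a minimal sketch of layered ownership resolution - the entities, data model, and 25% threshold are invented for illustration. It multiplies stakes along every ownership path, so a listed party hiding behind stacked shells still surfaces above an alert threshold where flat name screening of the counterparty alone would see nothing.

```python
# Minimal sketch of beneficial-ownership resolution layered on top of
# list screening (hypothetical entities, data model, and threshold).
# Multiplies ownership stakes along every path through the corporate
# graph so indirect, layered holdings by a listed party still surface.
SANCTIONED = {"OPAQUE HOLDINGS LTD"}
THRESHOLD = 0.25  # assumed effective-ownership alert threshold

# OWNERS[entity] = list of (owner, direct_stake)
OWNERS = {
    "TRADECO GMBH": [("SHELL ALPHA", 0.60), ("LOCAL PARTNER", 0.40)],
    "SHELL ALPHA": [("SHELL BETA", 0.80)],
    "SHELL BETA": [("OPAQUE HOLDINGS LTD", 0.70)],
}

def effective_owners(entity, stake=1.0, seen=None):
    """Yield (ultimate_owner, effective_stake) over all ownership paths."""
    seen = seen or frozenset()
    for owner, share in OWNERS.get(entity, []):
        if owner in seen:              # guard against circular structures
            continue
        eff = stake * share
        if owner in OWNERS:            # intermediate shell: keep walking up
            yield from effective_owners(owner, eff, seen | {owner})
        else:
            yield owner, eff

def screen(entity):
    hits = {}
    for owner, eff in effective_owners(entity):
        hits[owner] = hits.get(owner, 0.0) + eff   # sum parallel paths
    return {o: s for o, s in hits.items() if o in SANCTIONED and s >= THRESHOLD}

# OPAQUE HOLDINGS LTD holds 0.60 * 0.80 * 0.70 = ~33.6% of TRADECO GMBH,
# invisible to name screening of the counterparty alone.
print(screen("TRADECO GMBH"))  # OPAQUE HOLDINGS LTD at ~33.6% effective stake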
FATF's webinar made a useful point: the publicly available evidence on AI in sanctions evasion is still developing. We agree, and we also think that waiting for a perfect dataset is how you lose the first round of a technological shift.
In the terrorism financing (TF) domain, UN and FATF analyses have already begun to integrate AI into the risk picture. The 1267 Analytical Support and Sanctions Monitoring Team has reported experimentation with artificial intelligence by listed groups and cautioned that AI may sharpen recruitment and propaganda, particularly among younger audiences. A subsequent Monitoring Team report notes the use of AI-created fake documentation to circumvent KYC procedures at onboarding - a technique PF networks could transpose to procurement and vendor onboarding - and documents experimentation with AI for multilingual propaganda, guidance on using generative AI tools while avoiding detection, and efforts to recruit cyber specialists. FATF's July 2025 Comprehensive Update on Terrorist Financing Risks likewise notes how AI-driven recommendation systems on social platforms can amplify extremist content and related fundraising pathways.
By contrast, in PF, FATF's June 2025 study references AI only briefly, as an "emerging" technology with limited case evidence, even though AI can accelerate the familiar PF toolset: front companies, falsified documentation, procurement layering, and maritime obfuscation. The PF discourse is therefore lagging the TF discourse at precisely the moment when digital tactics are scaling. That lag is risky because PF networks touch the procurement, trade, logistics, and finance domains at once - domains where AI readily scales speed and deception.
The urgency is not theoretical. Recent U.S. law enforcement commentary underscores GREX Strategies' approach: Acting Assistant Attorney General Matthew Galeotti has warned that North Korean operators are "using Americans as their personal piggy banks", a reminder that illicit IT work, cyber-enabled theft, and sanctions evasion are converging in ways that can fund WMD programs.
Taken together, these signals argue for treating AI as a cross-cutting amplifier in PF risk assessments, supervisory examinations, and public-private information sharing, now. If PF frameworks wait for a catalog of perfect cases, controls will calibrate to yesterday's behaviors while AI-enabled networks expand in capacity and agility.
We see AI as a "dual use" reality for compliance: the same tools that harden our detection can harden an adversary's deception. The job now is to move from interest to implementation: to train people, tune models, and rewrite strategies so they anticipate AI-shaped behavior, not just react to it.
International standard-setters and regional bodies: prioritize integrating AI-enabled evasion into PF follow-up work and issue near-term guidance addenda on model governance and public-private analytics.
Competent authorities: fund AI-focused training, publish AI-specific PF red flags, and convene targeted public-private partnerships across maritime, trade finance, and virtual asset risks.
Institutions: pressure-test onboarding, screening, and investigation workflows against deepfake identities, synthetic entities, and AI-aided trade and maritime obfuscation. Demonstrate the uplift.
Describing AI as "emerging" may be technically accurate from a research vantage point, but operationally it is already here. The sooner we act like that is true - and answer the three questions above in the affirmative - the better our odds of keeping pace.
We work with clients around the world. If you would like to discuss a project, request a proposal, or explore how we can help, please reach out directly using the details below.
Phone: +49 1512 9511048
office@grex-strategies.com
www.grex-strategies.com
George Grech