Blue Sage Data Systems
Banking · February 11, 2025

BSA/AML Alert Triage Is Not the Place for Autonomous AI

BSA/AML alert triage is not the place for autonomous AI. It's a place where AI can save investigator time without taking the call.


A BSA officer at a community bank in western Nebraska manages a team of two full-time investigators and covers a third position herself when caseload spikes. Alert volume has been climbing for two years — not because the monitoring system got worse, but because the bank’s commercial book grew. The investigators are working systematically, but the queue is deeper than it used to be.

The bank isn’t looking for AI to close cases. It’s looking to recover the two hours per case spent gathering background the investigator already knows how to interpret: transaction history, account profile, prior case notes, relevant typology documentation. The assembly work is the problem, not the analysis.

Why “AI for compliance” goes wrong fast

The unreasonable versions of “AI for compliance” share a common shape: they propose to automate the decision at the end of the triage process, not the data gathering at the beginning.

Automating the SAR filing decision is not a reasonable application of AI in a community bank context. A SAR filing is a legal determination. The investigator’s analysis requires human judgment and human accountability. Examiners expect that judgment to be documented and attributable to a named person, not to a model. Institutions that try to route around that requirement don’t save compliance resources — they create regulatory exposure.

What goes wrong is conflating the data-gathering step with the judgment-and-decision step. Data retrieval and organization is work AI handles well. Analysis and accountability require a qualified professional with legal obligations. Mixing the two up produces tools examiners scrutinize and investigators don't trust.

The supervised-triage pattern (AI drafts, investigator decides)

The pattern that works is narrow on purpose.

When an alert fires, the investigator’s first task is building the case file: 180-day transaction history, the account profile and CIP record, prior alerts or cases for the same customer, relevant FinCEN typology documentation, and prior interaction notes. In theory, a modern core with a compliance case management platform has all of this in one place. In practice, it’s spread across two or three systems, and pulling it together takes 45 to 90 minutes per alert.

The AI step handles that assembly. When the alert opens, the pipeline pulls the relevant data, organizes it into a structured case summary, and presents it alongside the alert. The investigator opens a case with the transaction history formatted, the account profile current, and prior case notes surfaced — instead of spending the first hour building context from scratch.
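A minimal sketch of what that assembly step might look like. The data sources, field names, and connector functions here are illustrative stand-ins, not any specific core system's or case platform's API; a real build would replace the in-memory dicts with connector calls.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical stand-ins for core-system and case-management lookups.
TRANSACTIONS = {
    "ACCT-100": [
        {"date": date(2025, 1, 5), "amount": 9500.00, "type": "cash deposit"},
        {"date": date(2024, 6, 1), "amount": 120.00, "type": "check"},  # outside 180-day window
    ]
}
PROFILES = {"ACCT-100": {"customer": "Prairie Supply LLC", "naics": "423840"}}
PRIOR_CASES = {"ACCT-100": [{"case_id": "C-0042", "outcome": "no-file"}]}

@dataclass
class CaseFile:
    """Structured case summary presented alongside the alert."""
    alert_id: str
    account_id: str
    transactions: list = field(default_factory=list)
    profile: dict = field(default_factory=dict)
    prior_cases: list = field(default_factory=list)

def assemble_case_file(alert_id: str, account_id: str, as_of: date) -> CaseFile:
    """Pull the 180-day history, account profile, and prior cases into one structure."""
    window_start = as_of - timedelta(days=180)
    txns = [t for t in TRANSACTIONS.get(account_id, []) if t["date"] >= window_start]
    return CaseFile(
        alert_id=alert_id,
        account_id=account_id,
        transactions=txns,
        profile=PROFILES.get(account_id, {}),
        prior_cases=PRIOR_CASES.get(account_id, []),
    )
```

The point of the structure is that the investigator opens one object with everything attached, rather than querying two or three systems by hand.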

Then the investigator does the investigation. She reads the assembled file, evaluates whether the activity is consistent with the customer’s stated business, and makes the determination. The AI drafted the context. The investigator made the call. That sequence — AI drafts, investigator decides — is the only one that holds up under examination.

Audit logging as a first-class citizen

In a BSA/AML context, the audit trail is not an afterthought. It’s the product.

Every step needs to be logged and retrievable: what the pipeline pulled, when, what the investigator received, what she changed, when she approved the summary, what determination she made. That log is the evidence that the compliance process worked as designed when an examiner asks how a SAR decision or no-file decision was made.

The case management platform needs to record the AI-generated draft as a versioned artifact — preserved alongside the investigator’s edits, not overwritten by them. The determination and sign-off must be attributed to a named individual, not to a process.
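A sketch of what those two requirements look like as a data structure, under illustrative names: an append-only event log, draft versions that are kept rather than overwritten, and a determination that must carry a named person, never the pipeline itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    """One immutable entry in the case audit trail."""
    case_id: str
    actor: str        # "pipeline" for automated steps, a named person otherwise
    action: str       # e.g. "draft_created", "determination"
    timestamp: datetime
    detail: str

class CaseAuditLog:
    """Append-only log; every draft version is preserved, never overwritten."""
    def __init__(self, case_id: str):
        self.case_id = case_id
        self.events: list[AuditEvent] = []
        self.draft_versions: list[str] = []

    def record_draft(self, actor: str, draft_text: str) -> int:
        """Store a new draft version (AI-generated or investigator-edited)."""
        self.draft_versions.append(draft_text)
        version = len(self.draft_versions)
        self.events.append(AuditEvent(self.case_id, actor, "draft_created",
                                      datetime.now(timezone.utc), f"version {version}"))
        return version

    def record_determination(self, investigator: str, decision: str) -> None:
        """Sign-off is attributed to a named individual, not to a process."""
        assert investigator != "pipeline", "determinations must be attributed to a person"
        self.events.append(AuditEvent(self.case_id, investigator, "determination",
                                      datetime.now(timezone.utc), decision))
```

A production version would persist these records durably and make them retrievable on demand, but the shape is the point: the AI draft and the investigator's edits coexist as separate versions, and the decision row names a person.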

If the pipeline the bank is evaluating doesn’t surface these questions at the design stage, slow down. Audit logging in a compliance workflow isn’t a feature to add later. It belongs in the initial build scope.

What “investigator time saved” actually means in practice

Time savings come from two places: case-file assembly and second-look efficiency.

Case-file assembly moves from 45 to 90 minutes down to 10 to 15. The investigator still reviews the assembled file — she doesn’t rubber-stamp it — but reviewing is faster than building. That’s where the recoverable time lives.

Second-look efficiency develops over the first 90 days. The investigators build confidence in what the pipeline assembles reliably and what it misses. The review becomes more targeted. That calibration reduces review time further without reducing scrutiny.

For a team of two investigators processing 40 to 60 alerts per month, the recovered time typically runs 15 to 25 hours per investigator per month. That range reflects what a well-built pipeline produces; actual results depend on core system query speed, case management integration depth, and how well the investigators document their own process for the build team to calibrate against.
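The rough arithmetic behind that range can be checked directly. This is an illustrative bracket built from the figures above, not a guarantee; the 15-to-25-hour estimate sits inside it.

```python
# Per-investigator alert load: 40-60 alerts/month split across two investigators.
alerts_low, alerts_high = 40 / 2, 60 / 2            # 20 to 30 alerts each

# Assembly drops from 45-90 minutes to 10-15 minutes per alert.
saved_low, saved_high = 45 - 15, 90 - 10            # 30 to 80 minutes saved per alert

hours_low = alerts_low * saved_low / 60             # conservative end
hours_high = alerts_high * saved_high / 60          # optimistic end
print(f"{hours_low:.0f} to {hours_high:.0f} hours per investigator per month")
```

The bracket runs wider than the stated 15-to-25-hour range, which is consistent with the caveat: actual results depend on query speed and integration depth, not just the per-alert arithmetic.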

The metrics we’d watch month one

Five numbers in the first 30 days:

Case-file assembly time. Baseline: what investigators were logging before. Target: median time from alert open to first investigator analysis drops toward the 10-to-15-minute range.

Investigator correction rate. Field-level corrections (e.g., a wrong date range) are calibration issues. Structural corrections (the pipeline pulled the wrong account's history) are design issues. Month one is when these patterns surface.

Alert closure time. Total time from alert open to case close. This is the workload management number the compliance team tracks.

No-file rate. Should be stable relative to the pre-build period. A significant shift in either direction means the pipeline is surfacing or obscuring context that’s affecting determinations.

Examiner-ready documentation rate. Can every closed case produce a complete audit trail on demand? This should be 100% from day one. If it isn’t, fix that before tuning anything else.
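The five numbers above can be computed from closed-case records in a few lines. The field names here are illustrative; real records would come from the case management platform.

```python
from statistics import median

def month_one_metrics(cases: list[dict]) -> dict:
    """Compute the five month-one metrics from a list of closed-case records."""
    return {
        "median_assembly_minutes": median(c["minutes_to_first_analysis"] for c in cases),
        "structural_correction_count": sum(c["structural_corrections"] for c in cases),
        "median_closure_hours": median(c["hours_to_close"] for c in cases),
        "no_file_rate": sum(1 for c in cases if c["determination"] == "no-file") / len(cases),
        "examiner_ready_rate": sum(1 for c in cases if c["audit_trail_complete"]) / len(cases),
    }
```

Anything below 1.0 on the last number is the first thing to fix, per the guidance above.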

The BSA officer and compliance committee should review all five monthly for the first quarter. The compliance team owns the determination — the metrics are how the institution knows the tool is supporting that work rather than distorting it.

For more on how Blue Sage approaches compliance-adjacent workflows in community banking, see the banking practice.

→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call 415 481 2629