Is This Workflow Ready to Automate?
Not every workflow is ready. Here's the checklist we use before recommending automation to a client.
A manufacturing operations director in Omaha calls us in October with a list of seven workflows she wants to automate. The list is good. But three of the seven shouldn’t be automated yet — not because automation is wrong for them in principle, but because the workflows aren’t ready.
Recommending automation before a workflow is ready produces the wrong result: a tool that technically works but doesn’t stick, because the process is too variable, too low-volume, or too judgment-heavy to run reliably without constant intervention. Getting that call right early saves the build cost and the credibility cost of shipping something the team abandons in month two.
The wrong workflows to automate (one-off, judgment-heavy, low-volume)
Three categories consistently fail as automation candidates.
One-off and bespoke. If the workflow happens differently every time — each instance requiring a unique response to unique circumstances — automation adds friction rather than removing it. Custom proposals, contract negotiation, novel regulatory situations: these are problem-solving work. The value is in the judgment, not the throughput.
Judgment-heavy without a clear decision boundary. Some workflows look repetitive but aren’t. A claims adjuster reviewing coverage disputes is doing contextual interpretation that varies case to case. The judgment isn’t a bug to route around — it’s the point. AI can assist (summarize the file, flag missing documentation), but the workflow shouldn’t be automated in the sense of AI making the determination.
Low-volume. Five instances per month don’t support a build. The time drag is small, and even a modest build cost outweighs the savings, so the ratio doesn’t favor automation. Low-volume workflows are candidates for process improvement — standardize the inputs, document the steps — not a production pipeline.
The right ones (high-volume, deterministic-ish, well-documented)
The workflows that automate well share three features.
Volume. High throughput makes the time-savings calculation concrete. If coordinators process 40 submissions per week, or analysts pull the same data set from four systems every Friday, the hours are real and the payback is calculable.
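The payback arithmetic behind that calculation can be sketched in a few lines. The figures below (runs per week, minutes per run, rate, build cost) are hypothetical, not from any engagement:

```python
# Illustrative payback arithmetic: months until cumulative labor savings
# cover a one-time build cost. All inputs are hypothetical.

def payback_months(runs_per_week: int, minutes_per_run: float,
                   hourly_rate: float, build_cost: float) -> float:
    """Months of labor savings needed to recover the build cost."""
    hours_per_month = runs_per_week * 52 / 12 * minutes_per_run / 60
    monthly_savings = hours_per_month * hourly_rate
    return build_cost / monthly_savings

# 40 submissions/week at 15 minutes each, $45/hour, $12,000 build:
print(round(payback_months(40, 15, 45.0, 12_000), 1))  # → 6.2
```

At half the volume the payback period doubles, which is the concrete version of the point above: throughput is what makes the hours real.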
Determinism. Not perfect — real processes never are — but close. Given the same inputs in the same format, the right output is almost always the same. ACORD fields map to AMS fields the same way every time. A grain contract memo has a consistent structure regardless of the merchandiser. Workflows with clear input-to-output mappings are the ones AI handles reliably.
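In code terms, a deterministic mapping like the ACORD-to-AMS example is a fixed lookup rather than an inference. The field names below are hypothetical placeholders, not actual ACORD or AMS identifiers:

```python
# Hypothetical field names; the point is that the mapping is fixed,
# so the same input fields always land in the same output fields.
ACORD_TO_AMS = {
    "InsuredName": "client_name",
    "PolicyEffectiveDate": "effective_date",
    "TotalInsuredValue": "tiv",
}

def map_submission(acord_fields: dict) -> dict:
    """Apply the fixed mapping, skipping fields absent from the input."""
    return {ams: acord_fields[acord]
            for acord, ams in ACORD_TO_AMS.items()
            if acord in acord_fields}

print(map_submission({"InsuredName": "Omaha Mfg Co", "TotalInsuredValue": 2_500_000}))
```

Workflows that reduce to this kind of table are the ones AI handles reliably; workflows where the mapping itself varies by case are the judgment-heavy category above.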
Documentation. If no one has written down how the workflow runs — if it lives in one person’s head — the documentation work has to happen before the build. Automation codifies a process. If the process isn’t defined, the automation codifies confusion.
The 6-question checklist
Before recommending a build, we work through six questions:
- How many times per week does this workflow run? Below 15 to 20 instances per week, revisit the economics. The threshold isn’t a hard cutoff, but below it we’d want to see high dollar risk per instance or significant compliance exposure to justify the build.
- Can you describe the right output without saying “it depends”? If every answer requires a caveat about who’s handling it, the workflow has more judgment embedded than it appears. That changes the design.
- Is there an existing system of record for the output? Automation that produces a document you then key into another system has only moved the problem. The build should write to the ERP, the AMS, the core — wherever the work lives.
- Who owns the exception cases? Every automated workflow produces some wrong outputs. Is there a role designated to catch and correct them? If the answer is “the AI will figure it out,” the workflow isn’t ready.
- Has this process been stable for at least six months? If it’s being redesigned or expected to change next quarter, building against it now means rebuilding it in three months.
- Can you show us three real inputs from the last 30 days? If not — because inputs aren’t consistently retrievable — that’s a signal about how the workflow is actually documented in practice.
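The checklist reduces to a simple readiness screen. The sketch below is illustrative only — the question wording is paraphrased and the pass/hold logic is an assumption, not a Blue Sage tool:

```python
# A minimal sketch of the six-question checklist as a readiness screen.
# Defaults to "fix the process first" unless every answer is yes,
# mirroring the bias-to-not-automate described later in the article.

CHECKLIST = [
    "Runs 15-20+ times per week (or carries high per-instance risk)?",
    "Right output describable without 'it depends'?",
    "Existing system of record the build can write to?",
    "Named owner for exception cases?",
    "Process stable for at least six months?",
    "Three real inputs retrievable from the last 30 days?",
]

def recommend(answers: list[bool]) -> str:
    """Return a build recommendation from yes/no checklist answers."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("answer every question")
    failed = [q for q, ok in zip(CHECKLIST, answers) if not ok]
    if not failed:
        return "ready to scope a build"
    return "hold off; fix first: " + "; ".join(failed)

print(recommend([True] * 6))  # → ready to scope a build
```

A marginal score surfaces exactly which questions failed, which is the input to the fix-the-process-first work described below.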
Examples from each anchor industry
Insurance: Commercial submission intake scores well on all six. High volume (30 to 50 per week for a mid-sized agency). Deterministic ACORD-to-AMS field mapping. Clear system of record. Defined exception role. Stable process.
Banking: Loan committee packet assembly scores well on volume and documentation for banks with consistent packet formats. It scores lower on determinism for complex credits with multiple entities — those require a different design than a straightforward CRE deal.
Manufacturing: RFQ intake is a strong candidate at shops processing more than 20 RFQs per week. The 8D drafting pipeline scores well for plants with consistent NCR volume and operators willing to record voice memos.
Agribusiness: Grain deal capture scores well on all six during origination season. Off-season, volume is weaker — worth noting when scoping an engagement that spans the full calendar year.
The bias to not automate
When a workflow scores marginally — high enough to be interesting, low enough to be uncertain — the default recommendation is to hold off and fix the process first.
Fixing the process means: document the steps, standardize the inputs, assign the exception-handling role, run it manually in its better-defined form for 60 days. At the end of 60 days, the checklist scores higher, the examples are cleaner, and the build — if you still want it — is faster and more likely to run well.
The workflows we’ve seen fail in production almost always failed because the checklist was rushed, not because the technology was wrong. Slowing down at the evaluation step is consistently the better investment. Details on how Blue Sage structures that evaluation are in the how we work section.