Blue Sage Data Systems
A use case we run for Omaha insurers

AI for claims processing — Omaha

Submission triage, FNOL summaries, status correspondence, severity tagging — drafted by AI, approved by an adjuster. The work that runs Mutual of Omaha-class claims operations, faster and cleaner.

Lincoln companies asking the same? See the Lincoln view →

Text Rosey · Schedule a call →

The workflow, end to end

What goes in, what the AI does, what comes out, what your team gets back.

Input
Adjuster notes + claim file + policy + loss runs
Work
Draft FNOL summaries and status correspondence with severity tags; flag claims requiring escalation; cross-reference policy coverage
Output
Adjuster-ready letters and FNOL packets in the queue, with severity tagged for triage
Saved
15–25 minutes per claim touch; 30–60 minutes on complex FNOL packets

What this looks like in production

Claims processing has the canonical shape of an AI-suited workflow: high-volume, document-heavy, repetitive across cases, with consequential decisions that legitimately need a human adjuster's judgment at the end. The right architecture is AI as drafter, adjuster as approver — not AI as decider.

In production at an Omaha-class insurer, the workflow looks like this. The adjuster's notes and the claim file enter the workflow. AI summarizes the claim, drafts an FNOL packet or status letter, applies severity tags based on policy and loss-run analysis, and routes to the right queue. The adjuster reviews the draft, edits where needed, and approves. The letter goes out under the adjuster's name, with the AI-drafted version archived as the source.

The result is roughly 15–25 minutes saved per claim touch — without any change to the adjuster's judgment, accountability, or final say. NAIC's AI Model Bulletin (Nebraska adopted as IGD-H1, June 2024) requires this human-in-the-loop pattern for AI use in claims, with the carrier's AIS Program documentation covering the governance architecture.
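The drafter/approver split described above can be sketched as a minimal pipeline. This is an illustrative assumption, not production code: the `Claim` and `Draft` fields, the keyword heuristic, and function names like `ai_draft` are all hypothetical stand-ins for the real model and claims system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the AI-as-drafter, adjuster-as-approver architecture.

@dataclass
class Claim:
    claim_id: str
    adjuster_notes: str
    policy_ref: str

@dataclass
class Draft:
    claim_id: str
    summary: str
    severity: str                       # e.g. "low" / "medium" / "high"
    status: str = "pending_review"      # the AI never finalizes a draft
    adjuster_edits: list = field(default_factory=list)

def ai_draft(claim: Claim) -> Draft:
    """AI drafts the FNOL summary and tags severity; it does not decide."""
    summary = f"FNOL summary for {claim.claim_id} (notes + {claim.policy_ref})"
    # Stand-in for the real severity model.
    severity = "high" if "total loss" in claim.adjuster_notes.lower() else "medium"
    return Draft(claim.claim_id, summary, severity)

def adjuster_approve(draft: Draft, edits: list) -> Draft:
    """The human adjuster reviews, edits, and approves under their own name."""
    draft.adjuster_edits.extend(edits)
    draft.status = "approved"
    return draft

claim = Claim("CLM-1001", "Rear-end collision, total loss", "POL-77")
final = adjuster_approve(ai_draft(claim), ["Corrected date of loss"])
```

The point of the shape: nothing reaches `"approved"` status without passing through `adjuster_approve`, which is the human-in-the-loop property the NAIC bulletin asks carriers to document.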

How we run it

  1. Two-week diagnostic with claims operations leadership. Map the actual claim flow, the documents, the volumes, the bottlenecks. Identify where adjuster time is being consumed by drafting vs. judgment.
  2. Build inside the real claims system — the AMS, the document repository, the correspondence templates. No sandbox; production from week 3.
  3. Pilot with a small named adjuster group. Two weeks of side-by-side use — adjuster handles the claim normally, AI drafts in parallel, adjuster compares.
  4. Tune severity tagging against your actual claim taxonomy and your senior adjusters' calibration. The tags drive triage; getting them right is the difference between savings and noise.
  5. Roll out to full claims org with manager-led training. AI use policy + AIS Program documentation in place before broad rollout.
  6. Set up the audit trail: every AI-drafted document is archived with its source data and the adjuster's edits. Examination-ready.
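The audit trail in step 6 can be modeled as an append-only record per AI-drafted document. A minimal sketch, assuming hypothetical field names — the real repository and schema would come from the carrier's claims system:

```python
import datetime
import hashlib
import json

# Hypothetical examination-ready audit record for one AI-drafted document.
def audit_record(claim_id, source_data, ai_draft, adjuster_edits, approved_by):
    record = {
        "claim_id": claim_id,
        "drafted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_data": source_data,        # what the AI saw
        "ai_draft": ai_draft,              # what the AI produced
        "adjuster_edits": adjuster_edits,  # what the human changed
        "approved_by": approved_by,        # who signed off
    }
    # A content hash makes later tampering detectable during examination.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

rec = audit_record("CLM-1001", {"policy": "POL-77"}, "Draft status letter...",
                   ["Fixed date of loss"], "adjuster_jane")
```

Each record pairs the source data, the AI draft, and the adjuster's edits, so an examiner can reconstruct exactly what the model produced versus what a human approved.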

Common questions

Does this comply with NAIC IGD-H1?
Yes — when implemented with the architecture above. NAIC IGD-H1 (Nebraska, June 2024) requires a written AIS Program for AI use in claims (per NAIC Model Bulletin §1 scope). Human-in-the-loop drafting with adjuster approval, audit trail of every AI-drafted document, and the AIS Program governance documentation are the load-bearing elements. We deliver all three as part of the engagement.
Will this replace adjusters?
No — and the data supports that. A 2026 SHRM study found that, among organizations with deployed AI, AI's organizational impact was 5.7x more likely to shift job responsibilities than to displace jobs. What changes is the adjuster's day: less time drafting, more time on consequential decisions, complex claims, customer relationships, and exceptions.
What about agentic AI for claims?
The right move for most Omaha insurers is assistive AI first — the architecture above. Fully agentic claims processing (where AI takes a sequence of actions through closure without per-step human review) is 12–18 months out for most mid-market carriers, and the governance work has to land first.
How does severity tagging work?
We tune the model against your historical claims and your senior adjusters' calibration. The tags surface in the queue alongside the AI-drafted documents, with confidence scores. Adjusters use the tags for triage; the AI doesn't make the routing decision autonomously.
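The way tags surface in the queue can be sketched as follows. The threshold value and names (`SeverityTag`, `triage_hint`) are illustrative assumptions; the actual taxonomy and calibration come from the carrier's senior adjusters.

```python
from dataclasses import dataclass

# Hypothetical severity tag as it would surface in the triage queue.
@dataclass
class SeverityTag:
    claim_id: str
    severity: str      # drawn from the carrier's own claim taxonomy
    confidence: float  # 0.0-1.0, shown to the adjuster alongside the tag

def triage_hint(tag: SeverityTag, review_threshold: float = 0.75) -> str:
    """Return a queue hint; the adjuster, not the AI, makes the routing call."""
    if tag.confidence < review_threshold:
        return "low-confidence: adjuster triage required"
    return f"suggested queue: {tag.severity}"

hint = triage_hint(SeverityTag("CLM-1001", "high", confidence=0.92))
```

Low-confidence tags fall back to manual triage rather than a confident-looking wrong answer — that fallback is what keeps tagging a savings rather than noise.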
What happens when AI gets a claim wrong?
The adjuster catches it at review — that's the architectural design. Mistakes show up in three ways: incorrect severity tags (caught during triage), factual errors in drafted letters (caught during review), or routing recommendations that don't match the actual claim shape (caught during queue assignment). All three are observable in the audit log; we tune monthly based on adjuster corrections.

→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call 415 481 2629