Blue Sage Data Systems
A play we run in Omaha when AI rollouts stall

Lead with the people narrative before the technology roadmap

When adoption stalls, the fix isn't a louder rollout. It's answering the question employees are actually asking — what changes for me, what stays the same, what does my role look like in twelve months — before introducing one more tool.

Lincoln companies asking the same? See the Lincoln view →

Text Rosey · Schedule a call →

The pattern

The pattern is consistent across stalled rollouts: leadership announces an AI initiative, frames it as productivity or cost savings, gives staff access to a tool, sets up a usage metric, and waits. Adoption flatlines. Leadership concludes the staff is resistant. Staff conclude leadership has lost the plot.

What's actually happening: employees are reading the productivity-and-cost-savings framing as layoff preparation, regardless of what executives say in the all-hands. Gartner's 2024 research found that 73% of HR leaders report employees experiencing change fatigue, and that 74% say managers aren't equipped to lead change. When you stack an AI rollout on top of that without changing the narrative, the rollout gets absorbed into the existing fatigue.

The fix is not communicating harder. The fix is changing what you communicate first. The technology roadmap is a poor opening move. The people narrative — what changes for me, what stays the same, what does my role look like in twelve months — is the right opening move. The technology roadmap can come second.

The play

  1. Audit the change load before adding anything

    List every initiative in flight that affects your team. An AI rollout layered on top of three half-finished initiatives is exactly the condition that produces what the research calls "change fatigue." Sequence the AI rollout to land after — or in place of — something else, not on top of it.

  2. Write the people narrative for each affected role

    For each role touched by the rollout, write a one-page document. What changes (specific tasks). What stays the same (specific tasks). What the role looks like in twelve months. Real specifics — not "AI will help you focus on higher-value work."

  3. Equip managers before staff

    Gartner's data is clear — 74% of HR leaders say their managers aren't equipped to lead change. Before staff hear about the rollout, managers need rehearsals, scripts, and the FAQ they'll get. Manager-led adoption is dramatically more durable than a top-down mandate.

  4. Replace activity metrics with outcome metrics

    Token spend, login frequency, and "used AI today" counts get gamed within a week. Use cycle time, error rate, customer outcomes, and employee capacity instead. Outcome metrics resist gaming and give an honest signal about whether the rollout is working.

  5. Build a feedback loop and use what it tells you

    Gartner's 2026 CHRO research found that organizations that adapt change plans based on employee feedback are 4x more likely to achieve change success. Run a structured two-week pulse asking what's working, what isn't, and what's blocking. Then adjust the rollout. Visibly.

  6. Address the replacement narrative directly, not implicitly

    If you're not going to lay people off, say so specifically. If you can't say that with confidence, don't pretend. Employees can tell when leadership is hedging, and the hedge itself is read as confirmation. Better to say "here's what we don't know yet" than to perform certainty.

What changes at 30 / 60 / 90 days

30 days

Change-load audit complete. Managers have role-specific scripts. Staff hear the people narrative before the tool. Activity metrics replaced with outcome metrics.

60 days

Feedback loop has produced two visible adjustments to the rollout. Manager-staff conversations are happening (not avoided). Adoption pattern shifts from "compliance" to "use."

90 days

Outcome metrics show real change in cycle time or error rate. Manager confidence in leading the change is measurably higher. The rollout has stabilized rather than fizzled.

When this play applies

When is this play the right move?
When adoption is flat or going backwards, when you've heard "they're just resistant" from leadership, when usage metrics look fine but the work isn't actually changing, or when manager engagement is low. If any two of those are present, this play applies.
How long before we see results?
Most of the work happens in the first 30 days — the change-load audit, the people narrative, the manager enablement. Behavior shift becomes visible at 60 days. Outcome metrics catch up at 90.
Do we have to halt the rollout to run this play?
Usually not. Most teams can run this in parallel with continued tool access — but with the metrics, narrative, and manager preparation realigned. A full halt-and-restart is sometimes warranted, but it's the exception.
What if the people narrative is genuinely "we're trying to reduce headcount"?
Then the rollout is going to fail no matter what you do, because the staff already know. The honest move is to separate the AI rollout from the headcount conversation — or to have the headcount conversation directly. Layering AI on top of an unstated layoff plan is the worst combination.
Does this play work for AI specifically, or any change effort?
It works for any change effort. It matters most for AI rollouts, though, because the replacement narrative is unusually loud in the broader culture — which makes the people story unusually load-bearing for adoption.

→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call 415 481 2629