Blue Sage Data Systems
A real concern Omaha leaders raise

We have shadow AI in our company. Now what?

Employees are using ChatGPT, Claude, or Copilot on personal accounts with company data — and you found out the hard way. The fix isn't a ban. The fix is bringing the use into approved channels with the right tools, the right rules, and the right training.

Lincoln companies asking the same? See the Lincoln view →

Text Rosey · Schedule a call →

Common questions from Omaha leaders

Should we ban consumer AI tools immediately?
A ban almost never works as a first move. A ban without an approved alternative pushes the use further underground — onto personal devices, personal accounts, and home networks where you have zero visibility. The pattern that works is the inverse: provide enterprise-tier ChatGPT, Claude, or Copilot with the right data-handling guarantees, then enforce against unapproved tools. Express-Harris 2026 found only 36% of companies provide an approved tool list at all.
How do we find out how widespread it is?
Start with an anonymous staff survey. Ask: which AI tools have you used for work in the last month? On company accounts or personal ones? With company data? Most leaders are shocked by the answers. Network logs help, but they miss personal-device use, which is where most of the risk lives.
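For teams that do pull network or proxy logs, the check can be as simple as matching destinations against a watch list. A minimal sketch, assuming a CSV proxy-log export with `timestamp,user,host` columns — the column names and the domain list are illustrative, not a complete inventory:

```python
import csv
import io

# Illustrative watch list of consumer AI endpoints (an assumption;
# extend it to match the tools your staff actually mention).
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_csv: str) -> list[dict]:
    """Return log rows whose destination host is on the watch list."""
    reader = csv.DictReader(io.StringIO(log_csv))
    return [row for row in reader if row["host"] in AI_DOMAINS]

# Assumed export format: one row per request.
sample = """timestamp,user,host
2026-01-05T09:12:00,jdoe,chatgpt.com
2026-01-05T09:13:00,jdoe,example.com
2026-01-05T09:14:00,asmith,claude.ai
"""

hits = flag_ai_traffic(sample)
print(len(hits))  # 2 of the 3 sample requests hit AI domains
```

Remember the caveat above: this only sees company-network traffic. Personal devices on home networks never appear in these logs, which is why the anonymous survey comes first.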
Is this a fireable offense?
Treating shadow AI as a discipline issue is usually the wrong move. Most shadow AI starts because employees are trying to do their jobs better. Punitive responses kill the trust you need to surface other issues later. The right response is governance + training, not blame.
What about the data that's already gone into ChatGPT?
Depends on the tier. Free-tier and consumer-tier accounts typically train on user input by default unless that's explicitly turned off. Enterprise tiers carry data-residency and no-training guarantees. The first audit step is to identify (a) what data went where and (b) what tier was used. For data that went into a free-tier account, the practical answer is usually "it's gone into the model" — and the response is policy and training going forward, not retroactive containment.
We're a regulated industry. What's the actual risk?
Real and tier-specific. NAIC IGD-H1 (Nebraska, June 2024) requires insurers to maintain a written AIS Program covering third-party arrangements. OCC interagency third-party guidance applies to banks regardless of vendor type. HIPAA-covered entities cannot put PHI into a tool without a Business Associate Agreement. Section 1557 prohibits discrimination through patient-care decision-support tools. Compliance with each of those gets harder to demonstrate the more shadow AI you have.
What's the first move?
Audit, then approve, then train. Audit first (anonymous survey plus network logs). Stand up the enterprise tier of one or two tools with the right contracts. Publish the approved list. Train staff on what's allowed and what's not. Collect signed attestations, then review quarterly. The audit alone cuts most of the risk by surfacing what's actually happening.

→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call (415) 481-2629