Blue Sage Data Systems
A real concern Omaha leaders raise

AI and data privacy — what mid-market leaders should actually worry about

The data privacy risks of AI in 2026 are concrete and tier-specific. Here's the honest read for Omaha companies, what each regulator actually says, and where the risk really is (vs. where the headlines say it is).

Lincoln companies asking the same? See the Lincoln view →

Text Rosey · Schedule a call →

Common questions from Omaha leaders

Where is the actual data-leak risk in AI use?
The concrete risk: free- and consumer-tier AI accounts typically train on user input by default unless you opt out. Pasting customer PII, patient PHI, attorney-client communications, donor records, or proprietary code into those tools puts the data into a training corpus you don't control. The risk is real and frequent: Express-Harris 2026 found only 36% of companies provide an approved-tool list, and that gap is where this happens.
What does HIPAA require for AI use with PHI?
A Business Associate Agreement with the AI vendor before PHI touches the tool — full stop. Enterprise tiers of major AI vendors offer BAAs; consumer tiers do not. HHS OCR's January 2025 NPRM would also treat AI software touching ePHI as a technology asset that must be in your inventory and risk analysis. Section 1557 separately prohibits discrimination through patient-care decision-support tools.
What about NAIC IGD-H1 if we're an insurer?
Nebraska's IGD-H1 (June 2024) requires a written AIS Program covering third-party arrangements. AI vendors are third parties; their handling of consumer data, training-data practices, and security posture all flow into your AIS Program documentation. The Department may request the program during examination.
What about banking — does OCC 2023-17 apply to AI vendors?
Yes. OCC Bulletin 2023-17 (and FDIC FIL-29-2023, FRB SR 23-4) makes it explicit: 'use of third parties does not diminish or remove a banking organization's responsibility.' AI vendors are third parties under this guidance. OCC 2026-13 (April 2026) adds model-risk management on top, though it explicitly excludes generative AI from its scope; an interagency RFI on generative AI is anticipated.
Is enterprise-tier AI actually safer?
Materially yes, when configured correctly. Enterprise tiers offer no-training guarantees, data-residency commitments, audit logs, BAA availability for healthcare, and SOC 2 reports. None of those are typical at the consumer tier. The catch: enterprise tier is only as safe as the contract you signed and the configuration you set. Verify both.
What's the practical first move?
Three things in parallel: (1) inventory current AI use, (2) stand up enterprise tiers of one or two approved tools with proper contracts, (3) draft an AI use policy that names prohibited data categories specifically. SHRM 2026 found only 49% of organizations have an AI use policy at all, and that gap accounts for most of the visible risk.


→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

or call (415) 481-2629