A real concern Omaha leaders raise
AI and data privacy — what mid-market leaders should actually worry about
The data privacy risks of AI in 2026 are concrete and tier-specific. Here's the honest read for Omaha companies, what each regulator actually says, and where the risk really is (vs. where the headlines say it is).
Common questions from Omaha leaders
- Where is the actual data-leak risk in AI use?
- Concrete: free-tier and consumer-tier AI accounts typically train on user input by default unless that setting is explicitly disabled. Pasting customer PII, patient PHI, attorney-client communications, donor records, or proprietary code into those tools puts the data into a training corpus you don't control. The risk is real and frequent: the 2026 Express Employment Professionals/Harris Poll survey found only 36% of companies provide an approved-tool list, and that gap is exactly where this happens.
- What does HIPAA require for AI use with PHI?
- A Business Associate Agreement with the AI vendor before PHI touches the tool — full stop. Enterprise tiers of major AI vendors offer BAAs; consumer tiers do not. HHS OCR's January 2025 NPRM would also treat AI software touching ePHI as a technology asset that must be in your inventory and risk analysis. Section 1557 separately prohibits discrimination through patient-care decision-support tools.
- What about NAIC IGD-H1 if we're an insurer?
- Nebraska's IGD-H1 (June 2024) requires a written AIS Program covering third-party arrangements. AI vendors are third parties; their handling of consumer data, training-data practices, and security posture all flow into your AIS Program documentation. The Department may request the program during examination.
- What about banking — does OCC 2023-17 apply to AI vendors?
- Yes. OCC Bulletin 2023-17 (and FDIC FIL-29-2023, FRB SR 23-4) makes it explicit: 'use of third parties does not diminish or remove a banking organization's responsibility.' AI vendors are third parties under this guidance. OCC 2026-13 (April 2026) adds model-risk management on top, though it explicitly excludes generative AI from scope; an interagency RFI covering generative models is anticipated.
- Is enterprise-tier AI actually safer?
- Materially, yes, when configured correctly. Enterprise tiers offer no-training guarantees, data-residency commitments, audit logs, BAA availability for healthcare, and SOC 2 reports. None of those are typical at the consumer tier. The catch: enterprise tier is only as safe as the contract you signed and the configuration you set. Verify both.
- What's the practical first move?
- Three things in parallel: (1) inventory current AI use, (2) stand up enterprise tiers of one or two approved tools with proper contracts in place, (3) draft an AI use policy that names prohibited data categories specifically. SHRM 2026 found only 49% of organizations have an AI use policy at all, and that gap is most of the visible risk.
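For teams that want "names prohibited data categories specifically" to mean something enforceable, a policy can be paired with a simple pre-send screen. This is a hypothetical sketch, not a product recommendation: the function name, the category list, and the regex patterns are all illustrative assumptions, and real PII/PHI detection needs far more than three patterns.

```python
import re

# Illustrative only: example patterns for a few prohibited data
# categories an AI use policy might name. Not a complete detector.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # credit-card-like digit runs
}

def screen_prompt(text: str) -> list[str]:
    """Return the policy categories a prompt would violate (empty list = OK to send)."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]
```

Usage: `screen_prompt("Patient SSN is 123-45-6789")` flags the `ssn` category, while an ordinary prompt like "Summarize this meeting agenda" passes clean. The point isn't the regexes; it's that a written policy becomes far easier to follow when the prohibited categories are concrete enough to check mechanically.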
Sources
- Only 49% of organizations have AI use policies — The State of AI in HR 2026, SHRM (Society for Human Resource Management), 2026
- Only 36% of companies provide a list of approved or preferred AI tools — 8 in 10 Employees Say They Need AI Training — After Their Companies Already Rolled Out the Tools, Express Employment Professionals (Harris Poll fielding), 2026
- Insurers must develop, implement, and maintain a written AI Systems (AIS) Program for the responsible use of AI systems making or supporting decisions related to regulated insurance practices — Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, National Association of Insurance Commissioners (NAIC), 2023
- Use of a third party (including an AI vendor) does not reduce a bank's responsibility for safety, soundness, and consumer protection — Third-Party Relationships: Interagency Guidance on Risk Management, Office of the Comptroller of the Currency (OCC), with FDIC and Federal Reserve, 2023
- Section 1557 prohibits discrimination through the use of patient care decision support tools, including AI/clinical algorithms — Section 1557 Final Rule — Nondiscrimination Through Patient Care Decision Support Tools, HHS Office for Civil Rights, 2024
→ Start here
Text Rosey to begin.
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →