A real concern Lincoln leaders raise
AI and data privacy — what mid-market leaders should actually worry about
The data privacy risks of AI in 2026 are concrete and tier-specific. Here's the honest read for Lincoln companies, what each regulator actually says, and where the risk really is.
Common questions from Lincoln leaders
- Where is the actual data-leak risk in AI use?
- Concrete: free-tier and consumer-tier AI accounts typically use your input for model training unless you opt out. Pasting customer PII, patient PHI, attorney-client communications, donor records, or proprietary code into those tools puts the data into a training corpus you don't control.
- What does HIPAA require for AI use with PHI?
- A Business Associate Agreement with the AI vendor before PHI touches the tool — full stop. Enterprise tiers of major AI vendors offer BAAs; consumer tiers do not. HHS OCR's January 2025 NPRM would also treat AI software touching ePHI as a technology asset that must be in your inventory and risk analysis.
- What about NAIC IGD-H1 if we're an insurer?
- Nebraska's IGD-H1 (June 2024) requires a written AIS Program covering third-party arrangements. AI vendors are third parties; their handling of consumer data flows into your AIS Program documentation.
- What about banking — does OCC 2023-17 apply to AI vendors?
- Yes. OCC Bulletin 2023-17 makes it explicit: 'use of third parties does not diminish or remove a banking organization's responsibility.' AI vendors are third parties under this guidance.
- What about NITC 8-609 for Lincoln vendors contracting with the State?
- If you operate AI on behalf of state agencies, NITC Standard 8-609 governs. Vendors get pulled into OCIO security review and privacy impact assessment workflows. Treat it as binding for state contracting.
- What's the practical first move?
- Three things in parallel: (1) inventory current AI use across the company, (2) stand up enterprise tiers of one or two approved tools with proper contracts, (3) draft an AI use policy that names prohibited data categories specifically.
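One way to make item (3) operational is a lightweight pre-check that scans text for prohibited data categories before it ever reaches an AI tool. A minimal sketch, assuming regex-detectable categories — the category names and patterns here are illustrative placeholders, not a reviewed DLP rule set:

```python
import re

# Illustrative patterns only -- a real policy needs broader coverage
# (names, addresses, medical record numbers, etc.) and legal review.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the prohibited categories found in text (empty list = clear to send)."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

hits = check_prompt("Patient SSN is 123-45-6789, reach me at jo@example.com")
print(hits)  # → ['ssn', 'email']
```

A check like this doesn't replace enterprise-tier contracts or training; it just gives the written policy a mechanical backstop at the point where data would leave your control.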
Sources
- Only 49% of organizations have AI use policies — The State of AI in HR 2026, SHRM (Society for Human Resource Management), 2026
- Only 36% of companies provide a list of approved or preferred AI tools — 8 in 10 Employees Say They Need AI Training — After Their Companies Already Rolled Out the Tools, Express Employment Professionals (Harris Poll fielding), 2026
- Insurers must develop, implement, and maintain a written AI Systems (AIS) Program for the responsible use of AI systems making or supporting decisions related to regulated insurance practices — Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, National Association of Insurance Commissioners (NAIC), 2023
- Use of a third party (including an AI vendor) does not reduce a bank's responsibility for safety, soundness, and consumer protection — Third-Party Relationships: Interagency Guidance on Risk Management, Office of the Comptroller of the Currency (OCC), with FDIC and Federal Reserve, 2023
- Section 1557 prohibits discrimination through the use of patient care decision support tools, including AI/clinical algorithms — Section 1557 Final Rule — Nondiscrimination Through Patient Care Decision Support Tools, HHS Office for Civil Rights, 2024
→ Start here
Text Rosey to begin.
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →