Blue Sage Data Systems
AI policy and governance, plainly

What is an AI use policy?

For Lincoln mid-market leaders: a clean definition, what belongs in one, who signs it, and why most companies don't have one yet.

Omaha companies asking the same? See the Omaha view →

Definition

An AI use policy is a written document that defines how your organization uses AI: which tools are approved, which kinds of data must never be entered into an AI tool, who reviews AI-generated output before it leaves the company, and how employees report incidents.

At a minimum, a workable AI use policy includes seven things:

1. The approved tool list and how it's maintained.
2. Prohibited data categories: typically PII, PHI, attorney-client privileged material, source code under client NDA, donor records, and any data flagged by your industry regulator.
3. Human-in-the-loop requirements for customer-facing or consequential output.
4. Escalation paths when something goes wrong.
5. Attestation, meaning staff sign off that they've read and understood it.
6. The review cadence (quarterly is the floor).
7. Named owners.

In regulated industries, the bar is higher. The NAIC's AI Model Bulletin (adopted in Nebraska as IGD-H1 in June 2024) requires insurers to maintain a written "AIS Program." HIPAA-covered entities face proposed new security duties under HHS OCR's January 2025 NPRM and nondiscrimination obligations under the Section 1557 final rule. Lincoln-based vendors contracting with the State of Nebraska may be pulled into NITC Standard 8-609 obligations.

Why it matters for Lincoln companies

Most mid-market companies don't have one. SHRM's 2026 State of AI in HR found that only 49% of organizations have an AI use policy, and of those that do, only 25% consider it "future-proof." For nonprofits, Virtuous's 2026 benchmark found that 47% have no formal AI governance policy at all.

The downstream effect of not having a policy isn't usually a regulator inquiry; it's smaller and more frequent. It looks like an employee pasting a customer's PII into a free-tier consumer chatbot, a junior staffer using AI to draft donor communications without review, or an HR team using AI to screen candidates in ways anti-discrimination law prohibits.

Express-Harris 2026 found only 36% of companies provide a list of approved or preferred AI tools. That gap is where shadow AI lives — and where data leaks happen.

Common follow-up questions

How long should an AI use policy be?
Long enough to answer the questions employees actually ask. Most workable policies run 8–15 pages. Anything shorter usually leaves the hard parts unspecified; anything longer rarely gets read.

Does an AI policy belong in the employee handbook or as a standalone document?
Standalone, with a one-paragraph reference in the handbook. The handbook is updated annually; an AI policy needs quarterly review.

Who in the organization should own the policy?
Joint ownership works best: a Legal/Compliance owner for the policy itself, a CIO/IT owner for the approved tool list, and an HR owner for attestation and training. For insurers, Section 4 of the NAIC bulletin calls for a named AIS Program owner. For healthcare organizations, the Privacy or Security Officer belongs in the chain of approval.

What happens if we just don't have one?
Most likely: the policy effectively gets written by individual employees, one decision at a time. That's where data-leak incidents come from. Less likely but worse: a regulator examination (insurers under IGD-H1, healthcare entities under HIPAA) where the absence of a written program is itself a finding.

Is a generic template AI policy good enough?
As a starting point, sometimes. As the final version, no: the hard parts, like prohibited data categories and named owners, are specific to your organization. A template saves perhaps 30% of the drafting work; the rest is judgment only your team can supply.

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call (415) 481-2629