What is an AI use policy?
For Lincoln mid-market leaders. The clean definition, what should be in one, who signs it, and why most companies don't have one yet.
An AI use policy is a written document that defines how your organization uses AI: which tools are approved, which kinds of data may never be put into them, who reviews AI-generated output before it leaves the company, and how employees report incidents.
At a minimum, a workable AI use policy includes seven things:
1. The approved tool list and how it's maintained.
2. Prohibited data categories: typically PII, PHI, attorney-client privileged material, source code under client NDA, donor records, and any data flagged by your industry regulator.
3. Human-in-the-loop requirements for customer-facing or consequential output.
4. Escalation paths when something goes wrong.
5. Attestation: staff sign off that they've read and understood the policy.
6. The review cadence (quarterly is the floor).
7. Named owners.
In regulated industries, the bar is higher. NAIC's AI Model Bulletin (adopted in Nebraska as IGD-H1 in June 2024) requires insurers to maintain a written "AIS Program." HIPAA-covered entities have additional duties under the Section 1557 final rule, with more proposed in HHS OCR's January 2025 NPRM. Lincoln-based vendors contracting with the State of Nebraska may be pulled into NITC Standard 8-609 obligations.
Most mid-market companies don't have one. SHRM's 2026 State of AI in HR found that only 49% of organizations have an AI use policy, and only 25% of those that do consider it "future-proof." For nonprofits, Virtuous's 2026 benchmark found 47% have no formal AI governance policy at all.
The downstream effect of not having a policy usually isn't a regulator inquiry; it's smaller and more frequent. It looks like an employee pasting a customer's PII into a free-tier consumer chatbot, a junior staffer using AI to draft donor communications without review, or an HR team using AI to screen candidates in a way that Section 1557's bias-mitigation duty prohibits.
Express-Harris 2026 found only 36% of companies provide a list of approved or preferred AI tools. That gap is where shadow AI lives — and where data leaks happen.
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →