What is an AI use policy?
For Omaha mid-market leaders. The clean definition, what should be in one, who signs it, and why most companies don't have one yet.
An AI use policy is a written document that defines how your organization uses AI — which tools are approved, which kinds of data are prohibited from being used with AI, who reviews AI-generated output before it leaves the company, and how employees report incidents.
At a minimum, a workable AI use policy includes seven things (sketched as a concrete artifact below):

1. The approved tool list and how it's maintained.
2. Prohibited data categories — typically PII, PHI, attorney-client privileged material, source code under client NDA, donor records, and any data flagged by your industry regulator.
3. Human-in-the-loop requirements for customer-facing or consequential output.
4. Escalation paths when something goes wrong.
5. Attestation — staff signing off that they've read and understood it.
6. The review cadence (quarterly is the floor).
7. Named owners — who in your organization can answer the policy questions employees will ask.
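If it helps to see those seven components as something you can actually audit, here is a minimal sketch in Python of a policy record with one field per component. Everything in it (field names, tool names, the completeness check) is illustrative, not a standard schema; treat it as a checklist in code form, not an implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsePolicy:
    """Illustrative sketch: the seven components as structured data."""
    approved_tools: list[str]         # (1) approved tool list
    prohibited_data: list[str]        # (2) data barred from AI tools
    human_review_required: list[str]  # (3) output needing human sign-off
    escalation_contact: str           # (4) who to notify when something goes wrong
    attested_by: dict[str, date]      # (5) staff name -> date of attestation
    review_cadence_days: int          # (6) 90 = quarterly, the floor
    owners: list[str]                 # (7) named people who answer policy questions

    def is_complete(self) -> bool:
        """Every component populated, review cadence at least quarterly."""
        return all([
            self.approved_tools,
            self.prohibited_data,
            self.human_review_required,
            self.escalation_contact,
            self.attested_by,
            0 < self.review_cadence_days <= 90,
            self.owners,
        ])

# Hypothetical example: a minimal policy record for a 40-person firm.
policy = AIUsePolicy(
    approved_tools=["ChatGPT Team", "Microsoft Copilot"],
    prohibited_data=["PII", "PHI", "donor records", "client-NDA source code"],
    human_review_required=["customer emails", "donor communications"],
    escalation_contact="compliance@example.com",
    attested_by={"J. Smith": date(2026, 1, 15)},
    review_cadence_days=90,
    owners=["COO", "IT lead"],
)
assert policy.is_complete()
```

The point of the structure, not the code, is the discipline: if you can't fill in one of those seven fields for your organization today, that's the gap in your policy.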
In regulated industries, the bar is higher. NAIC's AI Model Bulletin (adopted in Nebraska as IGD-H1 in June 2024) requires insurers to maintain a written "AIS Program" — a more structured form of an AI use policy with explicit governance, third-party oversight, testing, and consumer-protection provisions. HIPAA-covered entities have additional duties under HHS OCR's January 2025 NPRM and the Section 1557 final rule on patient-care decision support tools.
Most mid-market companies don't have one. SHRM's 2026 State of AI in HR found only 49% of organizations have AI use policies, and of organizations that do have one, only 25% feel that policy is "future-proof." For nonprofits the gap is even wider: Virtuous's 2026 benchmark found 47% of nonprofits have no formal AI governance policy at all.
The downstream effect of not having a policy isn't usually a regulator inquiry — it's much smaller and much more frequent. It looks like an employee pasting a customer's PII into a free-tier consumer chatbot because they didn't know they couldn't, or a junior staffer using AI to draft donor communications without review, or an HR team using AI to evaluate candidates in a way the bias-mitigation duty under Section 1557 prohibits. Each of those is reputation risk, donor risk, or compliance risk that compounds.
Express-Harris 2026 found only 36% of companies provide a list of approved or preferred AI tools. That gap is where shadow AI lives — and where data leaks happen.
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →