How to write an AI use policy that holds up
A step-by-step approach for mid-market companies that don't have a policy yet, or have one that isn't working. Designed for fast review by Legal, Security, and HR.
Drafting an AI use policy that actually holds up follows a specific sequence. The order matters: writing the prose first and figuring out the approved tool list later is exactly how policies become out of date the moment they ship.
**Step 1.** Inventory current AI use. Survey staff anonymously. Map the answers to (tool, role, data type). You will find more shadow AI than you expected — Express-Harris 2026 found 38% of companies allow employees to use any AI tools they're familiar with. Most of that is invisible until you ask.
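The survey-to-map step can be sketched in a few lines. This is a minimal sketch, not a prescribed format: the field names and sample responses below are hypothetical, and the point is just that tallying (tool, data type) pairs surfaces the riskiest overlaps first.

```python
# Minimal sketch: normalizing anonymous survey answers into
# (tool, role, data_type) records. All field names and sample
# values are illustrative, not from any real survey.
from collections import Counter

responses = [
    {"tool": "ChatGPT", "role": "Sales", "data_type": "customer PII"},
    {"tool": "Copilot", "role": "Engineering", "data_type": "source code"},
    {"tool": "ChatGPT", "role": "HR", "data_type": "employee records"},
]

# Count usage per (tool, data_type) pair so the riskiest overlaps sort first.
exposure = Counter((r["tool"], r["data_type"]) for r in responses)
for (tool, data), n in exposure.most_common():
    print(f"{tool} x {data}: {n} respondent(s)")
```

Even a spreadsheet version of this tally is enough; what matters is that the map exists before you draft a word of policy.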
**Step 2.** Identify your applicable regulators. NAIC + Nebraska IGD-H1 if you write insurance. OCC 2023-17 / FDIC FIL-29-2023 if you bank. HIPAA Security Rule + Section 1557 if you touch PHI. NITC 8-609 if you contract with the State of Nebraska. Pull each rule's actual text — secondary summaries miss specifics.
**Step 3.** Build the approved tool list jointly with IT and Security. Per tool, document: data residency, retention, training-data opt-out, whether enterprise tier is in use, BAA status (for healthcare), SOC 2 / SOC 1 reports. Reject tools that won't disclose.
**Step 4.** Define prohibited data. PII / PHI / attorney-client / source code / donor records / regulator-flagged categories. Be specific. Vague lists ("sensitive data") get ignored.
**Step 5.** Define human-in-the-loop requirements per workflow. Customer-facing output? HITL. Consequential decisions about people (hiring, lending, claims)? HITL plus bias-mitigation per Section 1557. Internal drafting? Reviewer's discretion.
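The per-workflow routing in Step 5 is simple enough to state as a lookup. A sketch under stated assumptions: the workflow labels are invented for illustration, and the default deliberately falls through to reviewer's discretion rather than to "no review."

```python
# Sketch: mapping a workflow category to its review requirement,
# per Step 5. Category names are hypothetical, not exhaustive.
def review_requirement(workflow: str) -> str:
    consequential = {"hiring", "lending", "claims"}  # decisions about people
    if workflow in consequential:
        return "HITL + bias mitigation (Section 1557)"
    if workflow == "customer_facing":
        return "HITL"
    return "reviewer's discretion"
```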
**Step 6.** Define escalation. Who do staff tell when something goes wrong, including suspected data leakage? Single named role, single channel, response SLA.
**Step 7.** Attestation + training. A policy without attestation isn't a policy; it's a wishlist. Tie attestation to the role-specific AI training rather than running it as a separate exercise.
**Step 8.** Review cadence. Quarterly minimum. Specific calendar dates. A named owner runs the review and brings updates to the table — not "as needed."
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →