Blue Sage Data Systems
For Omaha mid-market leaders

How to write an AI use policy that holds up

A step-by-step approach for mid-market companies that don't have a policy yet, or have one that isn't working. Designed for fast review by Legal, Security, and HR.

Lincoln companies asking the same? See the Lincoln view →

Text Rosey · Schedule a call →

The sequence

Drafting an AI use policy that actually holds up follows a specific sequence. The order matters: writing the prose first and figuring out the approved tool list later is exactly how policies become out of date the moment they ship.

**Step 1.** Inventory current AI use. Survey staff anonymously. Map the answers to (tool, role, data type). You will find more shadow AI than you expected — Express-Harris 2026 found 38% of companies allow employees to use any AI tools they're familiar with. Most of that is invisible until you ask.
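The mapping in Step 1 can be sketched in a few lines. This is a minimal illustration, not a survey tool: the responses, tool names, and approved list below are all hypothetical.

```python
from collections import Counter

# Hypothetical survey responses: each row is (tool, role, data type).
responses = [
    ("ChatGPT", "Marketing", "public copy"),
    ("ChatGPT", "Claims", "customer PII"),
    ("Copilot", "Engineering", "source code"),
    ("ChatGPT", "HR", "employee records"),
]

approved = {"Copilot"}  # assumed current approved-tool list

# Shadow AI = any usage of a tool outside the approved list.
shadow = [r for r in responses if r[0] not in approved]
usage_by_tool = Counter(r[0] for r in responses)

for tool, role, data in shadow:
    print(f"UNAPPROVED: {tool} used by {role} on {data}")
```

Even at this toy scale, the triple structure makes the risk visible: the same unapproved tool shows up against three different data types, and only one of them is low-risk.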

**Step 2.** Identify your applicable regulators. NAIC + Nebraska IGD-H1 if you write insurance. OCC 2023-17 / FDIC FIL-29-2023 if you bank. HIPAA Security Rule + Section 1557 if you touch PHI. NITC 8-609 if you contract with the State of Nebraska. Pull each rule's actual text — secondary summaries miss specifics.

**Step 3.** Build the approved tool list jointly with IT and Security. Per tool, document: data residency, retention, training-data opt-out, whether enterprise tier is in use, BAA status (for healthcare), SOC 2 / SOC 1 reports. Reject tools that won't disclose.
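One way to keep Step 3 honest is to treat each tool as a structured due-diligence record and make "won't disclose" a hard fail. The schema below is an assumption for illustration, not a standard; field names are ours.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative per-tool due-diligence record. None = vendor would not disclose.
@dataclass
class ToolRecord:
    name: str
    data_residency: Optional[str]
    retention_days: Optional[int]
    training_opt_out: Optional[bool]
    enterprise_tier: bool
    baa_signed: Optional[bool]        # relevant only if PHI is in scope
    soc2_report: Optional[str]

    def approvable(self) -> bool:
        # Reject any tool that leaves a core due-diligence field undisclosed.
        required = [self.data_residency, self.retention_days,
                    self.training_opt_out, self.soc2_report]
        return all(field is not None for field in required)

tool = ToolRecord("ExampleAI", "US", 30, True, True, None, "SOC 2 Type II")
print(tool.approvable())  # True: every required field was disclosed
```

The design choice worth copying is the `Optional` fields: "unknown" and "no" are different answers, and a vendor that answers "unknown" has already told you something.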

**Step 4.** Define prohibited data. PII / PHI / attorney-client / source code / donor records / regulator-flagged categories. Be specific. Vague lists ("sensitive data") get ignored.

**Step 5.** Define human-in-the-loop (HITL) requirements per workflow. Customer-facing output? HITL. Consequential decisions about people (hiring, lending, claims)? HITL plus bias-mitigation per Section 1557. Internal drafting? Reviewer's discretion.

**Step 6.** Define escalation. Who do staff tell when something goes wrong, including suspected data leakage? Single named role, single channel, response SLA.

**Step 7.** Attestation + training. Policy without attestation isn't a policy — it's a wishlist. Tie attestation to the role-specific AI training rather than running it as a separate exercise.

**Step 8.** Review cadence. Quarterly minimum. Specific calendar dates. A named owner runs the review and brings updates to the table — not "as needed."

Common follow-up questions

How long does drafting take?
About 4–6 weeks of working sessions, plus 2–3 weeks of Legal and Security review. Faster is possible if you have a tight scope (one role, one tool category) or are using a template; slower is wise if you operate in multiple regulated industries.
Can we just adopt a template and call it done?
Templates are useful for sequence and section structure. They cannot make the organization-specific calls — which tools are approved, which data is prohibited, which workflows require HITL. Plan to use a template for ~30% of the work and your own judgment for the rest.
What if our Legal team doesn't have AI experience?
Most don't yet. The pattern that works: Legal owns regulatory mapping (what laws apply, what the rules require) while you bring AI-specific expertise on the operational sections. Co-drafting is faster than hand-off in both directions.
Should the board approve the policy?
Yes — board adoption signals that AI use is a governance matter, not an IT matter, which protects against the most common failure mode (AI strategy reduced to a software-procurement decision). For insurers under IGD-H1, board oversight of the AIS Program is implicit in the bulletin's governance requirement.
How do we test that it actually works?
Three ways. (1) Spot-check a sample of recent AI-touched work for compliance with HITL and approved-tool requirements. (2) Run a quarterly anonymous survey: "When you have a question about what's allowed, can you find the answer?" (3) Track incident reports — zero is not the goal; it usually means people aren't reporting. Healthy is 1–3 minor incidents per quarter, all caught and resolved.
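The spot-check in (1) is just audit sampling. A minimal sketch, assuming a hypothetical work-item log with `tool` and `hitl_reviewed` fields (both names are ours):

```python
import random

# Hypothetical log of recent AI-touched work items.
work_items = [
    {"id": i,
     "tool": "ApprovedTool" if i % 3 else "OtherTool",
     "hitl_reviewed": bool(i % 2)}
    for i in range(1, 101)
]

random.seed(7)                        # fixed seed so the audit is reproducible
sample = random.sample(work_items, 10)

# Flag anything that used an unapproved tool or skipped human review.
violations = [w for w in sample
              if w["tool"] != "ApprovedTool" or not w["hitl_reviewed"]]
print(f"{len(violations)} of {len(sample)} sampled items out of policy")
```

A fixed-size random sample each quarter is enough to see trend direction; don't try to audit every item.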


Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.


or call 415 481 2629