What is shadow AI?
When employees use AI tools your company hasn't approved — usually on personal accounts, with company data. Why it happens, why it matters, and how to bring it into the light.
Shadow AI is AI tool use that happens outside the company's sanctioned, governed channels. Most commonly, it's employees using consumer-grade ChatGPT, Claude, or Gemini accounts (often personal accounts) to do work that involves company data: drafting customer emails, summarizing internal documents, writing code, composing donor letters, formatting policy text.
It's the most common AI deployment pattern in mid-market companies, and it's almost always invisible to leadership until someone asks.
Express-Harris 2026 found that only 36% of U.S. companies provide a list of approved or preferred AI tools, while 38% allow employees to use any AI tools they're familiar with. SHRM 2026 found only 49% of organizations have AI use policies. The gap between those numbers is shadow AI.
In nonprofits the pattern is more extreme: Virtuous's 2026 benchmark found that 92% of nonprofits use AI in some capacity, but 81% use it on an "ad hoc basis," without shared workflows or documentation, and 47% have no formal AI governance policy.
Shadow AI creates three specific risks that compound the longer they go unaddressed.
**Data leakage.** Free-tier AI tools typically train on user input by default, and opting out is left to the individual user. Pasting a customer's PII, a patient's PHI, a donor's record, an attorney-client communication, or proprietary source code into a consumer chatbot can violate privacy laws, regulatory rules (NAIC IGD-H1, OCC third-party guidance, NITC 8-609 for state contracts), or contractual obligations.
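One way to make a prohibited-data rule concrete is to screen prompts before they leave the building. A minimal sketch in Python, assuming a gateway or browser extension sits between employees and the model; the regex patterns and the `screen_prompt` helper are illustrative stand-ins, not a real PII detector:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# DLP or PII-detection service rather than a handful of regexes.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of prohibited-data patterns found in a prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Customer SSN is 123-45-6789, reach her at jo@example.com")
if hits:
    # Block or redact before the text ever reaches an external model.
    print(f"Blocked: prompt contains prohibited data ({', '.join(hits)})")
```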
**Audit blindness.** When AI use is invisible, you can't audit it. You don't know whose data went where, when, or for what purpose. If a regulator asks, the answer "we don't know" is itself a finding.
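Of the three risks, audit blindness has the most straightforward technical fix once use moves into sanctioned channels: route every model call through a thin wrapper that records who asked, when, and for what purpose. A minimal sketch, where `call_model` is a hypothetical stand-in for whichever approved provider API a company actually uses:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the sanctioned provider's API call.
    return "model response"

def audited_call(user: str, purpose: str, prompt: str) -> str:
    """Call the model and append a record answering the audit question:
    whose data went where, when, and for what purpose."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,
        # Log sizes, not content, so the audit trail doesn't become
        # its own copy of sensitive data.
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

audited_call("jane@example.com", "summarize board minutes", "...")
```

Logging sizes and metadata rather than prompt text is a deliberate choice: an audit trail that copies every prompt verbatim is itself a second data-leakage surface.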
**Quality drift.** Shadow AI output skips internal review, so it tends to fall short of the standards that review enforces: brand voice, tone, factual accuracy, citation. By the time anyone notices, the drift has been embedded in client-facing work for months.
The standard wrong response is a blanket ban. That doesn't reduce shadow AI; it pushes it underground. The right response is the inverse: an approved tool list with enterprise-grade alternatives, clear prohibited-data rules, and training that makes safe use easier than risky use.
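In practice, the approved-tool list can start as plain configuration that IT enforces at the SSO or network layer. A hedged sketch of what that structure might look like, with hypothetical tool names and attributes; the real list is whatever your governance process actually approves:

```python
# Hypothetical policy: tool names and attributes are placeholders.
AI_POLICY = {
    "approved_tools": {
        "chatgpt-enterprise": {"trains_on_input": False},
        "claude-team": {"trains_on_input": False},
    },
    "prohibited_data": ["PII", "PHI", "donor records", "source code"],
}

def is_sanctioned(tool: str) -> bool:
    """A tool is sanctioned only if it's on the list and doesn't train on input."""
    entry = AI_POLICY["approved_tools"].get(tool)
    return entry is not None and not entry["trains_on_input"]

print(is_sanctioned("claude-team"))       # True
print(is_sanctioned("personal-chatgpt"))  # False: shadow AI
```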
Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.
Text Rosey · Schedule a call →