An AI Strategy Is a List with a Name
An AI strategy that isn't a list of named initiatives is just a slide. Here's the format that holds up under scrutiny.
Most companies that ask for an AI strategy already have one — they just don’t know it. A 180-person Omaha distributor, a Lincoln community bank’s operations team, a regional insurance brokerage with three offices in the state: they all have a handful of free trials running in different departments, an IT director who has fielded three vendor demos in the past month, and a leadership team that has said “we need to figure out AI” in at least two all-hands meetings without agreeing on what that means. That’s a strategy. It’s just not a written one.
The difference between that and something that actually produces results is smaller than most consultants will tell you. You don’t need a 40-page strategic plan. You need a list — a short, flat, honest inventory of the AI initiatives your company is actually running or actually planning to run — where every row has a name, a date, and a dollar attached to it.
Why “AI strategy” decks fail in execution
Strategy decks fail because they live at the wrong altitude. A 30-slide presentation that covers AI maturity curves, adoption frameworks, and technology landscape maps can be accurate, coherent, and completely useless for running a business in Q3.
The failure mode is specific: the deck describes what the company should be able to do with AI at some point in the future without specifying who is doing what this quarter. When the leadership team gets the deck, they approve it in the meeting and then nothing changes by Friday because there is no Friday owner. “Develop AI capabilities in underwriting” is a goal with no friction. Nobody can fail at it. Nobody can succeed at it, either.
The other failure mode is abstraction without inventory. A strategy that talks about “AI-enabling the customer journey” or “driving operational efficiency through intelligent automation” may be describing real problems, but it hasn’t named the actual workflows — the specific manual process, the specific role spending hours on it, the specific system the automation would need to connect to. Without that inventory, there’s no way to build a project or hold a team accountable.
The minimum viable AI strategy: name, date, dollar, owner
A minimum viable AI strategy is a spreadsheet, or a table in a document, with one row per initiative. Every row has four columns:
Name: A short, plain-English name for the initiative that the people running it would use themselves. “Submission intake automation” rather than “Intelligent document processing for underwriting operations.” Names that require a paragraph to explain will not survive a staff meeting.
Date: A go-live target. Not a quarter, not a range — a month. Months create pressure and make it obvious when a project is slipping.
Dollar: The projected value, in dollar terms, of what the initiative recovers — either cost reduction or revenue impact. This doesn’t have to be exact. A range is fine. But it has to be a number. “Improved efficiency” doesn’t belong in this column.
Owner: A single person — by name, not by title — who is accountable for delivery.
That’s the whole document. If you have six initiatives, you have six rows. If the table can’t fit on one page, you have too many initiatives.
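Laid out as a plain table, the format looks like this. The initiative names echo examples from this article, but the dates, dollar figures, and owner names are invented purely to show the shape:

```
Name                           Date     Dollar            Owner
Submission intake automation   March    $120K–$160K/yr    J. Alvarez
Renewal proposal drafting      May      $60K–$90K/yr      T. Nguyen
Policy manual Q&A              August   $40K–$55K/yr      M. Okafor
```

A row missing any of the four cells isn't ready for the table yet.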
How to source the list (interviews, not surveys)
The most reliable way to populate that table is a round of honest conversations with the people who run the work — not a survey, not a suggestion box, not a committee that generates a wish list.
Surveys produce noise. People fill them out quickly and list the things that feel strategically acceptable to complain about. The real bottlenecks — the ones worth building against — are the ones that senior people mention in the first five minutes of a conversation when you ask them where their hours go. Those rarely make it into a survey because they feel like operational problems, not strategic opportunities.
The interviews should cover two groups: the people who do the work day-to-day, and the people who own the P&L for the department. The two groups will describe the same problems differently. The workflow owner will name the friction in operational terms — “we retype every submission into the AMS before we can even start.” The P&L owner will name it in cost terms — “I have two full-time people whose job is basically data entry.” Those two descriptions together give you everything you need to scope an initiative.
Six to eight interviews, 45 minutes each, are usually enough to identify the top three or four candidates for the first rows of the table.
The first six rows for a typical mid-market firm
For a Nebraska mid-market firm in the state's anchor industries, the first six rows of the table tend to cluster around the same categories:
Document intake — submissions, applications, claims, invoices, contracts. Something arrives as a PDF or an email, and a staff member types it into a system. This is almost always the first row, because the hours are obvious and the build complexity is manageable.
Report drafting — committee packets, renewal proposals, status letters, weekly operations summaries. A staff member assembles them by hand from three sources. The AI builds a draft; the staff member reviews and approves.
Internal Q&A — policy manuals, compliance procedures, forms libraries. Staff waste time asking compliance or HR the same questions repeatedly. A well-built retrieval system answers in seconds with the source cited.
Exception routing — invoice holds, flagged transactions, escalation queues. The AI classifies, prioritizes, and routes. The staff member works the queue instead of building it.
Data extraction — pulling structured data from unstructured documents: ACORD forms, grain contracts, equipment invoices. The data gets extracted and posted to the system of record for human review.
Scheduling and intake coordination — intake queues, appointment preparation, order routing. Often a smaller win, but occasionally the one with the fastest time to value.
Not every firm will have all six. Some will have two of them at enough volume to fill the whole first engagement.
What it looks like in practice
A 250-person commercial lines agency in Lincoln finishes its first round of interviews in week two of a Blue Sage engagement. The planning document comes back as a table with five rows. Two of them are clear first choices — submission intake and renewal prep — both with documented baselines, named owners, and specific monthly targets.
The table gets reviewed in a 90-minute session with the COO and department heads. One initiative gets moved from Q3 to Q4 because the AMS upgrade it depends on won’t be done in time. One gets renamed because the original name confused the underwriting team. The dollar column gets tightened from a range to a committed figure for the two builds that will actually start.
The output of that session isn’t a strategy deck. It’s a working document with names, dates, and numbers on it. It lives in a shared folder the COO and IT director both have access to. It gets reviewed monthly. Two rows are marked done by the end of the year.
That’s an AI strategy. It just looks like a list. If you want the longer version of how that list gets sourced, scoped, and ranked before anyone writes a line of code, that’s how Blue Sage runs the planning phase.