Blue Sage Data Systems
operator · June 10, 2025

Five Questions to Ask an AI Partner Before You Sign

Five questions a non-technical CEO should ask any AI partner — before signing anything.


The pitch deck looked good. The demo worked. The partner firm seemed to know what they were talking about, and the reference client was enthusiastic. Now there’s a contract on the desk with a six-month term and a number in it that represents a meaningful commitment.

This is the moment where a Nebraska CEO — one who runs a real business, has a team depending on her, and doesn’t have a CTO on staff to translate the technical language — needs to slow down long enough to ask five specific questions. Not to be obstructionist. Not because the vendor is probably untrustworthy. Because the answers to these questions reveal more about how the engagement will actually go than anything in the pitch deck.

“Where does the data live, and who else can see it?”

This question has two parts, and both matter.

“Where does it live” means: what systems hold the data that this AI tool will process? Is it staying within your existing infrastructure, or is it being sent to a third-party API, a vendor’s cloud environment, a model training pipeline? The answer is usually some combination of the above, and the vendor should be able to name the specific platforms precisely enough that you can verify the answer.

“Who else can see it” means: is your operational data — your customer records, your financial data, your transaction history — being used to train a model that other clients of this vendor will also benefit from? Some AI tools run on shared infrastructure where inputs from multiple clients inform model outputs for all of them. Others are strictly isolated. The difference matters enormously for competitive sensitivity.

For any client in a regulated industry — banking, healthcare, insurance — this question also means: who has executed what agreement regarding your data? A signed business associate agreement for healthcare data, a data processing addendum for personally identifiable information, explicit confirmation that data is not retained after processing. Ask for the specific documentation, not a verbal assurance.

If the vendor is evasive or the answer is “it’s complicated,” that’s information.

“What does success look like at day 90, in numbers?”

Any AI partner worth engaging with should be able to answer this question concretely before the contract is signed. Not in general terms — “you’ll see efficiency gains and better throughput” — but in specific, measurable terms that can be checked against actual outcomes.

For a document processing workflow, success at day 90 might look like: average processing time per document drops from 45 minutes to under 10, exception rate stays below 8%, and the operations team can identify the exception patterns. For a customer communication workflow, it might be: response time on incoming inquiries drops from 6 hours to under 90 minutes during business hours, and the team lead reviews every AI-drafted response before it goes out.

The numbers don’t have to be guaranteed. Responsible vendors will hedge them: “based on similar builds, we’d expect this range, but the actual result depends on X and Y.” That kind of hedging is a sign of experience, not weakness. What you’re looking for is whether the vendor has thought concretely about what this engagement produces at the end of the initial term.

If the answer is genuinely “it depends — we’ll know more after discovery,” hold off on the full contract until discovery is done. A letter of engagement for discovery is different from a six-month build commitment.

“What stays running when you leave?”

This is the dependency question, and it is the most important one for a firm that doesn’t have internal technical staff to maintain what gets built.

Some AI implementations require ongoing vendor involvement to keep running: model updates, prompt tuning, integration maintenance, API version changes. If the vendor is the only person who knows how the tool works, the engagement has a perpetual renewal baked in by design. That’s not inherently wrong, but it should be a deliberate choice, not a hidden architectural fact.

Ask specifically: if we decided to part ways at the end of the initial term, what happens to the tool? Is the code ours? Is it documented to the point where another developer could maintain it? Is there a hosting arrangement we own or one the vendor owns? What would we lose access to if the relationship ended tomorrow?

The operational picture you want: the tool runs on your infrastructure or a standard cloud environment you control, the code is yours, and there is documentation sufficient for a competent developer to maintain it. Anything short of that is a dependency you’re accepting, and you should understand what you’re accepting before you sign.

“Who on my team will own this on day 91?”

An AI tool with no internal owner is not a business asset — it’s a vendor dependency. The ownership question is about more than job title. It’s about who will monitor the tool’s performance, escalate when something breaks, gather feedback from the people using it, and decide when a process has changed enough that the tool needs to change too.

This person doesn’t need to be technical. She needs to be someone who cares about the workflow the tool supports, has standing to make decisions about it, and will actually pay attention to how it’s performing after the initial excitement fades. For most mid-market firms, this is an operations manager, a department head, or a senior individual contributor — not the CEO, not IT.

Ask the vendor what their hand-off process looks like. What does training look like for this person? What documentation will she have at the end of the engagement? What does ongoing support look like if she has a question in month four? A vendor with a clear answer to these questions has built engagements that survived the hand-off before.

“What’s the fastest way to kill this engagement if it’s not working?”

This question makes vendors uncomfortable. That’s why you should ask it.

The answer tells you two things: how the vendor thinks about risk, and whether the contract actually gives you a way out if the engagement goes sideways.

Look for specific terms: the notice period required to exit, what triggers a refund or credit, what constitutes a breach on the vendor’s side that gives you the right to terminate without penalty. If the contract is silent on exit, or if the exit terms require you to pay out the full contract value regardless of performance, you’re committing to a relationship with no leverage if the work doesn’t meet expectations.

A vendor who is confident in their work should be willing to include performance-based exit terms. Not a guarantee of specific outcomes — but a provision that says if the defined success metrics are demonstrably off track, the client has the right to exit with reasonable notice and a pro-rated refund for unused prepayment.

If a vendor won’t discuss exit terms until after signing, that’s a negotiation tactic. Walk away from it.

These five questions require no technical expertise to ask, and none to evaluate the answers. They require the same skepticism a Nebraska business owner applies to any significant vendor relationship. AI engagements are not magic, and the firms that do them well understand that a client who asks hard questions at the contract stage is a client who will be a better partner through the build.

For more on how Blue Sage structures engagements and what we hold ourselves accountable to, see how we work.

→ Start here

Text Rosey to begin.

Rosey is our executive-assistant bot. Text the number below — she'll ask two questions, offer three calendar slots, and put a 30-minute call on Jim's calendar.

Text Rosey · Schedule a call →

or call 415 481 2629