
What We Learned Reviewing 50+ AI Vendor Pitches (2022–2025)

Thinkscoop Engineering · Jul 30, 2025 · 13 min read

Between 2022 and 2025 we sat on both sides of the table for AI vendor pitches. The patterns that separated serious partners from slideware were surprisingly consistent.

AI vendor pitches changed a lot between early 2022 and 2025. The buzzwords evolved and the slide templates got prettier, but a few simple questions consistently cut through the noise. Whether you were buying or selling AI capabilities, those questions predicted how the next twelve months would feel.

After sitting in on more than fifty of these conversations - as advisors to buyers, as evaluators of partners, and occasionally on the selling side - we noticed the same signals repeating. Vendors who went on to deliver well telegraphed it in how they talked about past work. The ones who went on to struggle telegraphed that too.

Look for Evidence, Not Aspirations

Serious vendors could point to specific deployments: what they built, how it performed, what went wrong, and how they fixed it. Less serious ones leaned on generic claims, impressive but context-free metrics, or unnamed global banks that nobody could reference.

The most useful probe was asking a vendor to walk through a recent deployment in detail: the use case, the data, the architecture, the evaluation approach, and what they would do differently next time. Vendors who had actually done the work could answer fluently. Those relying on templates and borrowed case studies got vague quickly.

Ask About Incidents Up Front

Our favourite question from 2023 to 2025 was simple: tell us about your last AI incident and what you changed afterwards. The best partners lit up - finally, a grown-up conversation. They described specific failures, the root causes they found, and the process changes that followed. The weakest faltered or claimed they had never had a problem, which told us everything we needed to know.

Why this question works

Any team doing real production AI has had incidents. A vendor who claims otherwise either has no production deployments or has a culture that hides problems. Both are disqualifying for a serious partnership.

Data and Governance Answers Should Be Specific

Between 2022 and 2025, "enterprise ready" became the most overloaded and underspecified claim in AI vendor sales. Every deck said it. Almost none of them answered the questions that enterprise procurement teams actually cared about:

  • Where does customer data go, and what is the data residency model? (Not 'we are SOC 2 compliant' - where, specifically?)
  • What is your data retention and deletion policy for inference logs and fine-tuning data?
  • How do you handle model drift and degradation after a deployment is live? (One concrete pattern is sketched after this list.)
  • What SLA do you offer for inference latency, and what happens when you miss it?
  • Can we see your standard DPA and model cards, or do you not have them?
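
On the drift question specifically, credible answers named a concrete monitoring mechanism rather than gesturing at a dashboard. As a minimal sketch of the kind of mechanism we mean - assuming a model that emits a score per request; the function name, thresholds, and distributions below are our own illustrative choices, not any vendor's method - a population stability index check against a go-live baseline looks like this:

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare a live score distribution against a go-live baseline.

    Common rules of thumb: PSI < 0.1 is stable, 0.1-0.25 warrants a look,
    and > 0.25 suggests material drift.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins so the log term stays finite.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Illustrative only: synthetic score distributions standing in for real traffic.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2.0, 5.0, size=10_000)  # snapshot taken at go-live
live_scores = rng.beta(2.5, 5.0, size=10_000)      # e.g. last week's traffic
if population_stability_index(baseline_scores, live_scores) > 0.25:
    print("Drift alert: trigger re-evaluation against current data")
```

The specific statistic matters less than the shape of the answer: a baseline, a scheduled comparison, a threshold, and a named owner who gets alerted when it trips.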

Reference Calls: Engineers Over Executives

Executive references from a vendor's satisfied clients are almost always positive - they have been pre-selected and briefed. The more valuable conversation is with an engineer or technical lead who used the vendor's platform day-to-day. Ask for an engineering reference, and if the vendor hesitates, note it.

Questions worth asking an engineering reference: what was the hardest integration challenge and how was it resolved? How responsive was the vendor's support during incidents? What would you not use this vendor for? The willingness to answer the third question candidly is itself a data point about the relationship.

Shared Success Metrics: The Due Diligence Nobody Does

One of the most reliable predictors of a good AI vendor engagement was whether, before the contract was signed, both parties had written down, agreed, and signed off on what success would look like at 90 days and at 12 months. Projects with explicit, measurable shared success criteria were easier to manage, easier to course-correct, and ended more amicably when they did not go as planned.

  1. Define at least two quantitative success metrics per major deliverable - not just a go-live date (a hypothetical sketch follows this list)
  2. Agree on the evaluation method before starting: who runs it, on what data, against what baseline
  3. Include a 90-day review checkpoint where either party can re-scope without penalty
  4. Be explicit about what partial success looks like - not every AI project hits all its targets, and having a framework for that conversation in advance is valuable
  5. Put the success criteria in the contract, not just the statement of work - it changes the accountability dynamics
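
To make the first two items concrete: writing the criteria down as structured data, rather than as prose buried in a slide, forces both parties to fill in every field. The sketch below is purely hypothetical - the deliverable, metric names, and numbers are invented for illustration, not drawn from any engagement we reviewed:

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    deliverable: str
    metric: str
    baseline: float
    target_90_day: float
    evaluated_by: str      # who runs the evaluation (item 2)
    evaluation_data: str   # on what data, against what baseline (item 2)

# Hypothetical 90-day criteria for an invented deliverable.
criteria = [
    SuccessMetric(
        deliverable="document triage model",
        metric="precision at the auto-approve threshold",
        baseline=0.88,
        target_90_day=0.95,
        evaluated_by="buyer-run evaluation harness, vendor observes",
        evaluation_data="held-out sample of the quarter's production documents",
    ),
    SuccessMetric(
        deliverable="document triage model",
        metric="median end-to-end latency in seconds",
        baseline=4.5,
        target_90_day=2.0,
        evaluated_by="vendor dashboard, buyer-audited",
        evaluation_data="rolling seven days of production traffic",
    ),
]
```

The exact format matters less than the discipline it enforces: two metrics per deliverable, a named evaluator, and a named dataset, all agreed before kickoff.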

Key takeaways

  • Logos and slideware were weak predictors of delivery quality
  • The best vendors talked candidly about incidents and failures
  • Detailed data and governance answers beat vague "enterprise ready" claims
  • Reference calls with engineers revealed more than executive sponsors
  • Clear shared success metrics made projects safer for everyone