TL;DR
- AI vendor pitches are converging on the same demo, the same logos, and the same claims.
- Nine questions, asked in order, separate the vendors who run in production from the ones who demoed well.
- Use them in the second meeting, not the first. The first meeting is for context.
Why now
The number of AI vendor pitches landing in mid-market Asian inboxes has roughly tripled since 2023. IDC tracks more than 14,000 active AI vendors globally as of 2025, up from approximately 4,800 in 2022.[^1] Most pitches blend together. Procurement teams are exhausted, and AI buying decisions are increasingly made on charisma rather than diligence.
This article gives you nine questions to use in your second vendor meeting. Each is designed to surface specific information that vendors who actually run in production will answer well, and vendors who do not will fumble.
The nine questions
1. "Show me a customer of similar size in our region who is in production. Can we talk to them?"
The vendor who has nothing to show pivots to a Fortune 50 logo or a roadmap commitment. The vendor who has real customers names two or three within 30 seconds and offers an introduction within a week.
What you are testing: real production deployment versus marketing surface area. Pilots and proofs-of-concept do not count. Ask explicitly, "In production, with users they did not curate?"
2. "Walk me through what happens when the model returns a wrong answer."
The vendor who handwaves about "human-in-the-loop" without specifics has not built that loop. The vendor who has built it talks about confidence thresholds, escalation paths, audit trails, and the specific UX that flags low-confidence outputs.
What you are testing: production maturity. Real systems handle their failure cases. Demo systems hide them.
3. "What is your latency at p95 and p99 under our expected load?"
The vendor who quotes a single number ("about a second") has not measured. The vendor who has run real load tests gives you a distribution, talks about the long tail, and explains what drives the tail.
What you are testing: engineering rigour. p95/p99 latency is what determines user experience. Average latency is marketing.
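The p95/p99 distinction can be made concrete with a short sketch. The latencies below are synthetic, generated to mimic a fast majority with a slow tail; a vendor's real numbers would come from load tests against production-like traffic:

```python
import random
import statistics

# Simulated request latencies (ms) from a load test: a fast majority,
# plus a slow tail (the kind driven by cold starts, retries, or queueing).
random.seed(7)
latencies = [random.gauss(220, 40) for _ in range(950)] + \
            [random.gauss(1400, 300) for _ in range(50)]

mean = statistics.mean(latencies)
# quantiles(n=100) returns 99 cut points; index 94 is p95, index 98 is p99.
q = statistics.quantiles(latencies, n=100)
p95, p99 = q[94], q[98]

print(f"mean ≈ {mean:.0f} ms, p95 ≈ {p95:.0f} ms, p99 ≈ {p99:.0f} ms")
```

Note how far the p99 sits from the mean: a vendor quoting only the average would report a number several times lower than what the slowest one percent of users actually experience.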
4. "How do you handle our data, where does it live, who can see it, and what happens at termination?"
The vendor who has thought about regulated customers gives you a clear answer with named regions, encryption details, sub-processor list, and a data-export commitment. The vendor who has not gives you a vague "enterprise-grade security" answer.
What you are testing: readiness for regulated industries. Even if you are not a regulated industry, the answer reveals operational maturity.
5. "Show me your model evaluation methodology and the eval dataset for our use case."
The vendor who has done this work shows you a curated test set, automated evals running in CI, and a regression methodology. The vendor who has not has only customer demos.
What you are testing: ML engineering discipline. Without evals, the vendor cannot tell you when their model regresses. They will discover it from your complaint.
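What "automated evals running in CI" looks like can be sketched in a few lines. Everything here is illustrative: the test cases, the baseline figure, and the `model` callable are stand-ins, not any vendor's actual harness:

```python
# Minimal regression-eval sketch: score a model's answers against a
# curated test set and fail the build if accuracy drops below the
# score recorded for the last release.

BASELINE_ACCURACY = 0.90  # illustrative: accuracy of the previous release

test_set = [
    {"prompt": "Invoice due date for NET-30 issued 2025-01-01?", "expected": "2025-01-31"},
    {"prompt": "Currency code for Singapore dollar?", "expected": "SGD"},
]

def evaluate(model, cases):
    """Exact-match accuracy of `model` (any callable) over `cases`."""
    correct = sum(1 for c in cases if model(c["prompt"]) == c["expected"])
    return correct / len(cases)

def check_regression(model, cases, baseline=BASELINE_ACCURACY):
    """Raise in CI when the current score falls below the baseline."""
    score = evaluate(model, cases)
    if score < baseline:
        raise AssertionError(f"regression: {score:.2f} < baseline {baseline:.2f}")
    return score

# Demo with a perfect stand-in model (a lookup table):
lookup = {c["prompt"]: c["expected"] for c in test_set}
print(check_regression(lambda p: lookup[p], test_set))  # 1.0
```

A vendor with this discipline catches a regressed model before release. A vendor without it catches it when you call.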
6. "How does your pricing scale, and what triggers a renegotiation?"
The vendor who has thought about long-term partnership shows you a pricing curve and explains the inflection points. The vendor who has not gives you a one-line discount and changes the topic.
What you are testing: contractual sophistication. Vendors who hide pricing complexity in year one tend to surprise you in year two.
7. "Who at your company is the on-call engineer when our deployment breaks at 2 a.m.?"
The vendor who has a real on-call rotation names a team, an SLA, and a process. The vendor who does not pivots to "your customer success manager will help you."
What you are testing: operational seriousness. Customer success managers do not fix production systems at 2 a.m.
8. "What happens if your funding runs out, your acquirer kills the product, or your founders leave?"
The vendor who has thought about continuity discusses source escrow, contractual exit rights, data export, and runway transparency. The vendor who has not deflects.
What you are testing: vendor longevity risk. Bain's Technology Report 2025 notes that the median AI vendor founded in 2022-2023 has 14 months of cash runway as of mid-2025.[^2] Plan for the failure case.
9. "What is your model and tooling roadmap for the next 12 months?"
The vendor with conviction shows you a roadmap with two or three big bets and explains the trade-offs. The vendor without conviction promises everything to everyone, including features they have not started building.
What you are testing: strategic clarity. Vendors who promise everything build nothing well.
Implementation playbook
How to use the nine questions in practice.
- First meeting: Standard vendor pitch. Take notes. Do not interrogate. The vendor's framing tells you a lot.
- Within 24 hours after the first meeting: Send the nine questions to the vendor in writing. Ask for a 60-minute working session to walk through them.
- Second meeting: Walk through the nine questions. Do not let the vendor change the order. Have a technical reviewer present.
- Within 24 hours after the second meeting: Score each question 1-5. Total out of 45. Below 25 is a hard pass. 25-32 is a conditional consideration. 33+ is shortlistable.
- Reference calls: For shortlisted vendors, take the introduction offered in question 1. Ask the reference customer four things: production status, failure modes, support quality, and what they would change.
- Final selection: Run the shortlisted vendors through a paid 4-week proof of value with the same evaluation criteria you would apply in production.
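The scoring step in the playbook reduces to a few lines. This is only a sketch of the arithmetic described above; the function name is illustrative:

```python
# Nine answers rated 1-5, totalled out of 45, bucketed into the
# thresholds from the playbook: <25 hard pass, 25-32 conditional, 33+ shortlist.

def verdict(scores):
    assert len(scores) == 9 and all(1 <= s <= 5 for s in scores)
    total = sum(scores)
    if total < 25:
        return total, "hard pass"
    if total <= 32:
        return total, "conditional consideration"
    return total, "shortlist"

print(verdict([3, 4, 2, 3, 3, 4, 3, 3, 3]))  # (28, 'conditional consideration')
```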
What good vendor answers look like
The best vendor answers we heard across 2024-2025 shared three traits:
- Specificity over breadth. A precise answer to a narrow question beats a sweeping answer to a broad one.
- Comfort with uncertainty. "We do not know yet, and here is how we are testing" beats "we have it covered" from a vendor who has not.
- Customer voice in the answer. Vendors who quote real customer experiences in their answers, with permission, have done the work. Vendors who quote only their own marketing have not.
Counter-arguments
"Asking these questions slows down procurement." It does. By two weeks, typically. The alternative is a vendor selection mistake that costs six months and a partial rebuild. Faster is not always cheaper.
"Some vendors will refuse to answer questions like #8." Some will. That is the answer. A vendor unwilling to discuss continuity risk is a vendor whose continuity risk is real.
"The market is moving too fast for this much diligence." The market is moving. Your operating model is not. Vendor selection mistakes are easier to make and harder to unwind in a fast-moving market, not the reverse.
Bottom line
The nine questions are not exotic. They are basic enterprise procurement adapted for AI. Use them in the second meeting. Score honestly. Walk away from vendors who score below 25. The procurement decisions you regret are almost always the ones where you skipped the diligence because the demo was good.
Next read
- Build vs Buy vs Partner: An AI Decision Framework for Mid-Market Asia
- AI Strategy in 90 Days: A Practical Framework for CFOs
By Maya Tan, Practice Lead, AI Strategy.
[^1]: IDC, Worldwide AI Vendor Landscape, 2025, June 2025.
[^2]: Bain & Company, Technology Report 2025, October 2025, p. 88.