ISG Software Research Analyst Perspectives

Three Questions to Evaluate AI-Powered IT Software

Written by Jeff Orr | Apr 2, 2026 10:00:00 AM

Enterprise IT leaders are committing seven-figure budgets to "AI-powered" platforms across ITSM, security and cloud management categories. These contracts promise autonomous remediation, intelligent triage and predictive insights. The problem: most CIOs and CISOs can't articulate what the embedded AI actually does or whether it delivers measurable ROI beyond the software provider's deck.

The market is bifurcated. Some software platforms deploy production-grade machine learning (ML) models that genuinely reduce Mean Time to Resolution (MTTR), automate event correlation or detect anomalous spending patterns. Others have retrofitted legacy architectures with thin AI veneers to maintain competitive positioning. Your procurement due diligence must distinguish between the two before the contract is signed.

Three questions separate real AI capability from marketing. The first addresses the fundamental architecture of autonomous decision-making: what decisions does the AI make autonomously, and what still requires a human?

Most platforms marketed as "AI-powered" are ML-assisted rule engines with probabilistic recommendations layered on top of deterministic workflows. This isn't inherently bad, but you need visibility into the exact handoff points.

Ask providers to map the decision boundary: where does the model act independently versus where it surfaces a recommendation for human review?

In ITSM platforms, does the system auto-route tickets to resolver groups or does it surface a routing recommendation with a confidence score? In XDR or SIEM tools, does the platform quarantine an endpoint based on behavioral analysis or does it escalate to a SOC analyst first?
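As an illustration, a well-documented decision boundary reduces to a few lines of logic. The sketch below is minimal and hypothetical, not any provider's implementation; the threshold value, class name and routing function are assumptions.

from dataclasses import dataclass

# Assumed threshold; a credible provider documents this value and how it was set.
AUTO_ROUTE_THRESHOLD = 0.90

@dataclass
class RoutingPrediction:
    resolver_group: str
    confidence: float

def handle_ticket(ticket_id: str, prediction: RoutingPrediction) -> str:
    # Above the threshold, the model acts autonomously.
    if prediction.confidence >= AUTO_ROUTE_THRESHOLD:
        return f"auto-routed {ticket_id} to {prediction.resolver_group}"
    # Below it, the model only recommends; a human owns the decision.
    return (f"queued {ticket_id} for review: suggested {prediction.resolver_group} "
            f"at {prediction.confidence:.0%} confidence")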

If the provider can't diagram this boundary or provide documentation on confidence thresholds and override mechanisms, you're evaluating marketing collateral instead of capability.

The second question examines training data requirements and accountability: what data does the model require to train, and who owns the liability when it's wrong?

AI models inherit biases and gaps embedded in their training data. In ITSM, auto-classification models trained on years of poorly categorized tickets will replicate those patterns at scale. In cybersecurity platforms, false negatives carry direct risk exposure. A missed lateral movement signal or an incorrectly deprioritized alert isn't a process inefficiency; it's a potential breach.

Procurement teams should ask: What does model retraining cost in terms of time, data engineering resources and platform downtime? Who is accountable when the AI misroutes a Priority 1 incident or auto-closes a ticket that needed escalation? Contract terms should specify liability boundaries, model performance SLAs and access to explainability features (which variables influenced a given prediction).
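What "access to explainability features" should mean in practice: for any given prediction, the platform can return the variables that drove it and their direction of influence. The record below is schematic, with hypothetical field names and weights, not any product's actual API.

# Schematic explainability record; all identifiers and values are illustrative.
prediction_record = {
    "ticket_id": "INC-0042",
    "predicted_priority": "P3",
    "confidence": 0.81,
    "top_features": [
        # Signed weights: which variables pushed the prediction up or down.
        {"feature": "affected_service=payments", "weight": 0.34},
        {"feature": "keyword:outage", "weight": 0.18},
        {"feature": "reporter_vip_flag", "weight": -0.21},
    ],
}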

The third question strips away the pricing premium to reveal core value: can you show me the ROI calculation with and without the AI component?

Strip the AI premium out of the total license cost. Would you still select this platform based on its core workflow engine, integration catalog, user experience and reporting capabilities? If the answer is no, you're paying a significant markup for a feature set that may never deliver the promised efficiency gains.
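A back-of-the-envelope version of that comparison, using purely illustrative numbers, looks like this:

# All figures are assumed for illustration; substitute your own contract numbers.
base_license = 800_000      # annual cost of the core platform tier
ai_premium = 350_000        # incremental cost of the AI tier

core_benefit = 1_100_000    # instrumented annual savings from workflow, integrations, reporting
ai_benefit = 420_000        # instrumented annual savings attributable only to the AI component

roi_without_ai = (core_benefit - base_license) / base_license
roi_with_ai = (core_benefit + ai_benefit - base_license - ai_premium) / (base_license + ai_premium)

print(f"ROI without AI tier: {roi_without_ai:.0%}")  # 38%
print(f"ROI with AI tier:    {roi_with_ai:.0%}")     # 32%

In this illustrative case, the AI tier's incremental benefit doesn't clear its incremental cost: the "AI-powered" label dilutes the return rather than compounding it.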

Providers with genuine AI capabilities can produce time-to-value metrics and reference customers with before-and-after data. Look for specifics: ticket volume handled per analyst, percentage of incidents resolved without human intervention, reduction in alert fatigue and cost avoidance in cloud environments. Anecdotes about "improved efficiency" don't substitute for instrumented measurement.

Certain categories have moved beyond experimentation into measurable production impact. AIOps platforms use unsupervised learning for event correlation and root cause inference, reducing noise and accelerating incident resolution. Cloud FinOps tools deploy anomaly detection models to identify spending outliers and recommend rightsizing actions with quantified savings. Identity and access management platforms (IAM, PAM, and Non-Human Identity solutions) apply risk-scoring models to flag anomalous behavior and enforce adaptive authentication policies. These use cases share common traits: well-defined inputs, measurable outputs and clear accountability when the model's recommendation is incorrect.
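The FinOps case is the easiest of these to reason about because the inputs and outputs are unambiguous: a spend series goes in, a flagged outlier and a dollar figure come out. A toy version of the pattern, far simpler than production models, is a z-score over daily spend (numbers are illustrative):

import statistics

daily_spend = [1020, 980, 1005, 995, 1010, 2650, 1000]  # hypothetical daily cloud spend ($)

mean = statistics.mean(daily_spend)
stdev = statistics.stdev(daily_spend)
for day, spend in enumerate(daily_spend, start=1):
    z = (spend - mean) / stdev
    if abs(z) > 2:  # flag spend more than two standard deviations from the mean
        print(f"day {day}: ${spend} flagged as anomalous (z = {z:.1f})")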

ISG Research asserts that by 2028, enterprise buyers will decline AI premiums for platforms without autonomous capabilities, resulting in differentiated providers capturing up to 70% of new contracts while legacy providers experience 25–30% renewal attrition.

CIOs and CISOs should audit their current AI-powered platform investments using this three-question framework. For platforms already deployed, instrument the AI components to measure actual versus projected performance. For platforms in procurement, require providers to answer these questions in writing with specifics, not generalities.
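Instrumenting that comparison doesn't require much: capture the provider's projected metrics at contract time, then report measured values against them. A minimal sketch, with hypothetical metric names and numbers:

# Projections recorded at contract signing vs. values measured in production.
# Metric names and figures are illustrative, not from any specific platform.
projected = {"mttr_hours": 2.0, "auto_resolved_pct": 35.0, "alerts_per_analyst_day": 90}
measured = {"mttr_hours": 3.1, "auto_resolved_pct": 22.0, "alerts_per_analyst_day": 120}

for metric, target in projected.items():
    actual = measured[metric]
    gap = (actual - target) / target
    print(f"{metric}: projected {target}, measured {actual} ({gap:+.0%} vs. projection)")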

The objective isn't to avoid AI. Genuine AI capabilities deliver competitive advantage when applied to high-volume, repetitive decision-making with clear success metrics. The objective is to avoid paying premium pricing for decorative AI that adds complexity without measurable return.

Software providers should prepare for this level of scrutiny. Buyers are moving past feature checkboxes into operational accountability. If your platform claims AI capability, be prepared to show the architecture, the training data requirements, the decision boundaries and the customer proof points. Transparency will separate market leaders from fast followers riding the hype cycle.

Regards,

Jeff Orr