Artificial intelligence (AI) isn’t a feature gate. It creates value when you’ve defined the decisions you want to improve and have the data and workflows to support them. That means permissions and consent boundaries that hold up in the real world, and an audit trail that can explain what happened, why and what the AI touched.
Too many HR tech buying cycles begin with a shortcut: grab an RFI template, build a checklist and rush into demos that feel productive because they’re concrete. Everyone can ask “can it do X?” Everyone can score what they saw. The problem is that feature questions are rarely anchored in the outcomes you’re accountable for, the constraints of your operating environment or your readiness to change how work gets done. Instead, they’re anchored in the provider’s ability to perform, and HR teams often mistake performance for fit.
I’ve lost track of how often buyers lead with “does it have AI?” or “can it do this thing we saw elsewhere?” Those questions aren’t irrational. They’re a predictable response to time pressure, executive mandates and procurement habits that reward comparability over clarity. But they create the wrong center of gravity. They pull attention toward surface capability and away from the question that decides whether the investment pays off: what happens when this system has to operate inside a messy enterprise?
The better questions aren’t harder because they’re complicated. They’re harder because they require uncomfortable work before the provider conversation starts. Name the problems you’re solving, the decisions you want to improve, the outcomes you’ll measure and the realities of your environment that shape what “good” looks like. Only then can you ask a provider to align to your needs instead of rewarding a feature dump. I’ve made the same point in a prior analyst perspective on AI readiness: readiness is mostly about meaning, safety and adoption inside the real operating environment, not the existence of a capability.
This is where platform thinking matters, and where it gets misunderstood. When I say platform, I don’t mean “a suite” or “one provider for everything.” I mean the combination of the data model, workflow orchestration, extensibility, governance and experience layer. A platform is what allows you to evolve workflows and data models without having to reimplement or replace the stack every time the business changes. HR environments change constantly. Policies shift. Structures evolve. Regulations tighten. Acquisitions happen. Mandates appear. If the system can’t absorb change gracefully, it doesn’t matter how many features it had on day one.
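To make that concrete, here is a minimal sketch, assuming a hypothetical platform where policy lives as versioned data that workflow logic reads at runtime. The names (Policy, route_approval) and the rules themselves are illustrative, not any provider’s API; the point is that a monthly policy change becomes a new policy version, not a rewrite of orchestration logic.

```python
from dataclasses import dataclass

# Hypothetical sketch: policy is versioned data, not logic baked into each
# workflow. Field names and rules are invented for illustration.

@dataclass(frozen=True)
class Policy:
    version: str
    approval_levels: int             # how many approvals a change requires
    restricted_countries: set[str]   # populations needing extra review

def route_approval(request: dict, policy: Policy) -> list[str]:
    """Derive the approval chain from policy data at runtime."""
    chain = [f"manager_level_{n}" for n in range(1, policy.approval_levels + 1)]
    if request["country"] in policy.restricted_countries:
        chain.append("regional_compliance")  # exception routed, chain unbroken
    return chain

# A monthly policy change is a new policy version, not a reimplementation:
v1 = Policy(version="2025-01", approval_levels=1, restricted_countries={"DE"})
v2 = Policy(version="2025-02", approval_levels=2, restricted_countries={"DE", "FR"})

req = {"employee_id": "E123", "country": "FR", "change": "compensation"}
print(route_approval(req, v1))  # ['manager_level_1']
print(route_approval(req, v2))  # ['manager_level_1', 'manager_level_2', 'regional_compliance']
```

When policy is data, “change a policy midstream” is a one-place edit with a version history; when policy is hard-coded into workflows, the same change means hunting down every copy of the logic.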
That’s why “can it do X?” is so often the wrong opening move. It assumes capability exists in a vacuum, independent of the environment it has to operate within. A better framing is to anchor on objectives and constraints, then force the provider to show platform behavior under stress. If your reality includes heavy integrations, frequent policy and process change, and messy identity with segmented access, then the demo should live there.
Start with a cross-domain HR process that must move across systems. Prove where orchestration lives, what triggers what and how exceptions route without breaking the chain. Then change a policy midstream and show what must be updated, where, by whom and how you prevent drift when changes happen monthly. Finally, show the permissions truth: partial visibility, restricted populations, approvals that shift with reporting lines and an audit trail that still makes sense outside the happy path. If the answer is “services will build it,” or if change requires reconfiguring logic in multiple places, you’re not looking at platform strength. You’re looking at future rework.
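Here is what that permissions truth looks like in miniature, as a hedged sketch rather than any vendor’s model: visibility is resolved per viewer role, and every read leaves an audit record capturing what was returned, what was withheld and why the access happened. The roles and field names are assumptions for illustration.

```python
import json
from datetime import datetime, timezone

# Sketch only: field-level visibility by role, plus an audit record that
# explains what happened, why and what was touched. Roles, fields and the
# sample record are invented for the example.

RECORD = {"employee_id": "E123", "grade": "L5", "salary": 98_000, "country": "FR"}

VISIBLE_FIELDS = {                 # partial visibility, by role
    "hr_admin": {"employee_id", "grade", "salary", "country"},
    "manager":  {"employee_id", "grade", "country"},  # no salary
    "peer":     {"employee_id"},
}

AUDIT_LOG: list[dict] = []

def read_record(viewer: str, role: str, reason: str) -> dict:
    allowed = VISIBLE_FIELDS.get(role, set())
    view = {k: v for k, v in RECORD.items() if k in allowed}
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "viewer": viewer,
        "role": role,
        "reason": reason,                  # why the access happened
        "fields_returned": sorted(view),   # what was touched
        "fields_withheld": sorted(set(RECORD) - allowed),
    })
    return view

print(read_record("U42", "manager", "annual review"))
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

If a provider can’t show an equivalent trail for a restricted population or an off-path approval, the “audit trail” is a report, not a control.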
This connects directly to implementation. The uncomfortable truth is that implementation is less about configuration and more about operating model and decision rights. A platform becomes a system of decisions: who owns design choices, how changes are approved, how standards are enforced and how exceptions are handled without eroding the model. If those rules aren’t explicit, drift begins the moment you go live. That’s why I keep coming back to the idea that go-live is the beginning, not the finish line, and why continuous adoption has to be treated as a discipline rather than a slogan.
Now layer AI on top and feature-led selection gets more dangerous. Providers market AI as a feature. Buyers list it as a requirement. When you press on why AI is treated as a gate, you often hear that an executive team or board mandated it, with limited clarity on what decisions AI should improve, what work should change or how risk will be governed. Pressure to “have AI” is only going to intensify. I assert that by 2029, three-fourths of enterprises will require AI across HCM and adjacent systems for personalized experiences, but buying criteria will increasingly include governance, transparency and data-control requirements. If your data, permissions model and workflow controls are fragile, AI doesn’t rescue you. It scales the fragility.
Agentic AI raises the stakes further. As agentic capabilities mature, the container where data and processes live becomes less important than the ability of data to move across systems and of agents to connect externally in ways you can govern, often via the Model Context Protocol (MCP) or the Agent2Agent (A2A) protocol. In that world, the buying question isn’t “does it have AI?” It’s “can we govern what it will do, and can we trust the data and workflows it will touch?”
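As a rough illustration of that governance question, consider a gateway that sits in front of every agent tool call: it checks the agent’s granted scopes before forwarding anything and logs allow and deny decisions alike. This is a sketch only; it mimics the shape of an MCP-style invocation but uses no real SDK, and the agent names, scopes and tools are invented for the example.

```python
# Illustrative sketch: a governance gateway in front of agent tool calls.
# No real MCP or A2A SDK is used; agents, scopes and tools are assumptions.

ALLOWED_SCOPES = {"payroll_agent": {"read:timesheets"}}  # what each agent may touch

AUDIT: list[tuple] = []

class ToolCallDenied(Exception):
    pass

def invoke_tool(agent: str, tool: str, scope: str, args: dict) -> dict:
    granted = ALLOWED_SCOPES.get(agent, set())
    AUDIT.append((agent, tool, scope, scope in granted))  # log allow and deny alike
    if scope not in granted:
        raise ToolCallDenied(f"{agent} lacks scope {scope!r} for {tool}")
    # ... forward to the external system (MCP server, A2A peer) here ...
    return {"tool": tool, "args": args, "status": "executed"}

print(invoke_tool("payroll_agent", "timesheet_lookup", "read:timesheets", {"week": "2025-W40"}))
try:
    invoke_tool("payroll_agent", "salary_update", "write:compensation", {"employee": "E123"})
except ToolCallDenied as e:
    print("blocked:", e)
```

The mechanism matters less than the property: if you can’t enumerate, constrain and audit what an agent may do before it does it, you can’t answer the governance question, no matter how impressive the AI looks in a demo.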
The shift I’m arguing for isn’t subtle. Move from an application mindset to a platform mindset before the provider conversation starts. Align stakeholders on the what and the why long before debating the how. Rewrite criteria so they describe objectives, constraints and change velocity, not a checklist of functions. Design demos that force orchestration, governance and resilience to show up under policy change, integration complexity and messy roles. Define who owns design decisions, how data is governed and how value is measured as an operating model, not a project artifact.
Here’s the warning: If you keep buying HR tech by asking “can it do X,” you’ll keep getting systems that look capable in a demo and feel brittle in production. You’ll keep paying for rework that was predictable at selection time, just invisible because nobody forced reality to surface.
Here’s the inspiration: Platform thinking is not about buying bigger software. It’s about buying change capacity: selecting and implementing technology that can evolve as fast as your organization and the world around it, without resetting the foundation every time. Audit your criteria, rewrite your demo script to force reality and treat governance as a mechanism with owners and consequences. Do that, and providers will either meet you at platform level or reveal that they can’t.
Regards,
Matthew Brown