AI at Work: Powering Trusted Guidance
By Jess Von Bank, Global Transformation & Technology Advisor Leader, Mercer
In the first blog of this series, we explored the gap between AI promises and reality. The takeaway was simple: AI isn't a silver bullet; like any tool or technology, it's an enabler of outcomes you design. But that raises an important question: how do HR leaders know what outcomes to design for, what tools to trust, and how to separate the signal from the noise?
That's where trusted guidance comes in. And trusted guidance doesn't come from AI "expertise" alone. It comes from being a trusted explainer.
Beyond demos and taglines
If you've been in HR tech for any length of time, you know the playbook. A vendor demo dazzles with slick user experiences, the word "AI" appears on every slide, and the features are described in superlatives. HR leaders walk away impressed at best, overwhelmed at worst, and not necessarily informed or better prepared.
Features and taglines are not enough. HR doesn't just need to see what a tool does; they need to understand how it works, why it matters, and where it fits. Trusted HR & payroll partners don't just sell. They translate. They create common language across HR, IT, and the business. They explain complexity without oversimplifying, and they empower leaders to make informed, confident choices.
Because the stakes are so high, overselling AI isn't just irresponsible; it's a disservice to the outcomes organizations need and must manage responsibly.
What trusted HR AI guidance looks like
Trusted guidance starts with clarity, not complexity. It looks like:
- Explaining the "how," not just the "what." In addition to saying "our tool predicts attrition," a trusted advisor explains: What data is being used? How reliable are the predictions? How should managers act on them?
- Sharing real use cases. Not vague promises, but specific examples: "Here's how one organization reduced payroll errors by 30% using this approach."
- Naming the risks. Trusted partners don't shy away from hard conversations about bias, explainability, governance, or compliance. They help HR leaders understand not just the benefits, but the necessary boundaries.
- Teaching the questions HR should ask. Most leaders don't know what they don't know. Trusted advisors arm them with questions: How transparent is the model? Who owns the data? How will success be measured and optimized?
Steady, honest, responsible guidance builds trust.
Why this matters more than ever
AI is already in your HR systems, even if you don't realize it. From chatbots to scheduling, from recruiting platforms to learning systems, "AI-powered" features are already shaping the employee experience. That reality makes trusted guidance non-negotiable.
Without it, HR risks:
- Implementing black boxes. Tools that make decisions no one understands, eroding employee trust.
- Misusing AI. Applying it where it doesn't fit can lead to wasted investment or unintended consequences.
- Undermining credibility. Overselling AI inside the business, only to fall short on delivery.
The difference between responsible adoption and risky decisions lies in whether HR leaders are guided with clarity or left to navigate the noise alone.
The questions HR leaders should be asking
Before making decisions, HR leaders should be equipped with grounding questions:
- Explainability: Can this AI explain its outputs in plain terms that leaders and employees can understand?
- Transparency: What data is being used, for what purposes, and who owns it?
- Governance: How will we monitor the tool for bias, accuracy, and unintended outcomes?
- Accountability: Who is responsible if the AI gets it wrong, and how will it be refined?
- Alignment: How does this tool fit into our broader HR and business strategy?
A trusted advisor doesn't just answer these questions; they bring them to the table.
"Selecting the right AI partner goes beyond the technology. It is a strategic decision that includes considerations for data quality, data security, integration capabilities, and the certainty of ongoing service support, all working in lockstep across product development, regulatory and legal," says Helena Almeida, Vice President, Managing Counsel and AI Legal Officer at ADP. "If your company makes the wrong decision, it can lead to compliance violations, operational inefficiencies, and loss of trust with key stakeholders."
Cutting through the noise
Clarity builds confidence, but confidence alone isn't enough. Once HR leaders understand what AI can and can't do, the next challenge is ensuring it behaves as intended: reliably, ethically, and accountably. That's the work of governance.
In our next blog, we'll look at what it takes to build the right guardrails around AI, not to slow progress but to make it sustainable. Because the future of AI at work depends not just on what it enables, but on how responsibly we manage it.
Get our guide: Harnessing artificial intelligence in three key steps
Jess Von Bank is a 23-year industry veteran and passionate advocate for the future of work and talent. With experience as a recruiting practitioner and workforce solutions expert, she helps executives design digital-first cultures that meet both people's expectations and business needs.
A global thought leader in HR transformation, digital experience, and workforce technology, Jess specializes in recruiting, talent strategy, employer branding, DEI, and storytelling. She leads Mercer's Now of Work community and serves as President of Diverse Daisies, a nonprofit empowering girls. Based in Minneapolis, she balances racing for free swag with raising her three daughters.
