'You still have to have a human in the back feeding the bot what you want it to say,' says MPAC's Lana Chaim
The wave of AI adoption in HR isn't just coming; it's already here. But as more tools claim to revolutionize talent management and streamline workforce operations, plan sponsors are left asking a fundamental question: is this AI boom all hype, or can these solutions actually deliver a return on investment?
“AI holds huge potential to make our work faster, smarter, and more personalized. But with every promise comes a warning,” said Cameron Moore, vice-president of product and design at Dialogue, at the company’s Sparking Conversations symposium on Tuesday. “There certainly are risks, around privacy, bias, over-reliance, and whether some tools even deliver the ROI that they claim.”
Within that tension between potential and peril, HR leaders are calling out one of AI’s limitations: the lack of a human element.
“What you can’t screen for is things like culture fit, personality, or if this is someone I want to hang out with every day for eight hours,” said Julie Arsenault, referring to AI’s current use in resume screening. “You still need a human to do that.”
Darren Steeves, director of consulting services at CGI, noted a growing emphasis on ethical AI and the importance of keeping a “human in the loop.” He believes the real shift is moving from the idea of AI “replacing people to enabling them to repurpose their time,” allowing HR professionals to focus more on supporting employees and less on administrative burdens.
When asked how their organizations are using AI in benefits selection and as part of their ROI strategy, Arsenault, senior director of total rewards and HR technology at Dentalcorp, explained the organization uses AI to personalize benefits selection for employees, leveraging a machine learning model that tailors recommendations to individual needs.
“Team members can actually go in and say, 'This is kind of my circumstance',” she said, referencing inputs like prescription use or existing life insurance coverage. While AI has proven valuable in helping employees choose the right package, she drew a clear line when it comes to vendor relationships.
“I truly believe that that’s such a personal interaction. I don’t think AI can replace that in-person connection with that vendor,” added Arsenault.
Lana Chaim said the Municipal Property Assessment Corporation (MPAC) is still in the early stages of applying AI to employee benefits but is taking steps to build a more informed strategy.
Using their research chatbot, the organization is developing a system to “consolidate and centralize a lot of our employee queries,” explained Chaim, director of total rewards, wellness and workforce analytics at MPAC.
By tracking the types of benefit-related questions employees are asking, MPAC aims to uncover patterns and insights that will shape future offerings and improve how benefits are communicated and managed internally.
Chaim also underscored both the value and limitations of AI in recruitment. While AI can offer significant efficiency gains, she stressed that “recruitment is not straightforward” and still requires a human touch.
She pointed out that understanding a candidate’s journey and background involves nuances that an AI system or tool can’t pick up on. Still, she recognized the scalability and speed AI brings, especially when businesses are under pressure to fill roles quickly.
Addressing the potential privacy concerns that come with implementing AI in HR tools, Steeves acknowledged the skepticism but urged HR leaders not to let fear override common sense. He argued that AI tools have improved significantly, particularly in how they manage and secure personal data through cloud-based systems tailored to individual users.
His advice to organizations is to “just do your due diligence.” With so many tools flooding the market, he emphasized the importance of vetting for privacy and security, and of consulting internal experts or trusted partners to assess whether AI vendors are truly protecting sensitive information.
For Arsenault, responsible AI use starts at the individual level; privacy and quality controls are only part of the equation.
“You as a human using AI can also prompt it in a way where you're not exposing company proprietary information or data,” she said, adding that it ultimately comes down to being intentional with how plan sponsors interact with the tool, avoiding direct references to sensitive data and using placeholder language when necessary.
When it comes to engagement and sentiment analysis, AI can offer tangible upside, especially where traditional tools fall short.
On governance, Chaim explained that AI adoption at MPAC became widespread well before any formal guidelines were in place, referencing tools like Copilot that became part of daily workflows. In response, MPAC developed a detailed AI use policy to guide staff on safe and appropriate practices. The framework warns employees to avoid entering personal or confidential data and encourages modifying AI-generated content to prevent it from being reused or learned by the system.
When asked about how chatbots are being used in the workplace, Arsenault pointed out that while chatbots can manage basic HR tasks like checking PTO or referencing policies, they don’t have “that human element of being able to bring the empathy to the table.”
For her, AI lacks the emotional intelligence needed for complex issues like harassment or interpersonal conflict.
“You still have to have a human in the back feeding the bot what you want it to say,” she added.
Steeves acknowledged that chatbots had a rough start, but argued they’ve since evolved, particularly in back-end support roles.
He highlighted use cases in contact centers where AI not only assists in real time but also transcribes calls and identifies the right forms or actions, reducing hold times and streamlining service.
This “stepped fashion,” as he described it, allows AI to manage simpler tasks while reserving complex, sensitive issues, like harassment or mental health concerns, for human professionals.