We’ve been talking about healthcare a lot this year, and for good reason. The cost of healthcare, especially for employees and individuals, is hitting a breaking point. And the ever-changing rules of benefit management and implementation create complexity around compliance.
However, there’s another piece of the giant healthcare pie that we also need to talk about.
The use of AI-powered health benefit tools means employers need to look closely at how they protect employee health data. How employers handle this data matters just as much as the benefits they offer. In a high-profile example, Google was recently forced to change its health benefits policies tied to third-party AI health tools amid employee backlash.
Employees started speaking up when they were prompted to opt into sharing personal health data with a third-party AI platform, seemingly as a condition of enrolling in health benefits. The tool lets employees see their costs and get plan recommendations. Google later clarified that opting in was not required and acknowledged that the communication was unclear, and it revised both the policy and the communication. But employees had already taken to the company’s intranet and other messaging platforms to voice concern over who would access their data and how it would be used. As more companies turn to third-party AI for benefits, productivity measures, and more, the swift backlash underscores the importance of both communication and protection.
Innovation can improve business, including benefits. But it sits at a growing intersection of technology, privacy, and fiduciary responsibility.
Why it’s a fiduciary issue
Health data is among the most sensitive data an employer holds: claims data, medical conditions, and demographic details. For fiduciaries, there is an obligation to ensure employee data is handled carefully and in compliance with applicable regulations like HIPAA.
When employers introduce new AI-driven platforms designed to personalize benefits or improve decision-making, they need to ask: are employees truly choosing to share their data, or does it feel like a requirement?
Consent means employees can opt in without fear of losing access to benefits (or receiving less robust benefits). Anything less erodes trust and creates compliance risk.
Third-party tools don’t reduce an employer’s responsibility
AI and analytics vendors are increasingly common in the benefits ecosystem, promising better outcomes. But using a third party does not transfer the employer’s responsibility.
Plan sponsors are always responsible for:
- How employee health data is collected, stored, and used.
- Ensuring vendors meet privacy and security standards.
- Communicating clearly with employees about all of their options.
Balance innovation with employee trust
There’s no question that AI can help improve benefits and help employees make informed choices. But innovation can’t come at the expense of protection.
Review the language around data usage, provide ongoing education for employees about their protections, and revisit your vendor practices often.
The Google example reflects a broader shift in how benefits are designed and delivered, and an increased responsibility for plan sponsors. We are always here to answer plan sponsor questions, from compliance to auditing vendor best practices.
Read more about the case study here.