AI: Your employee health and benefits copilot 

Employee health and benefits programs need an overhaul. According to new People Risk 2024 research, HR and risk managers view the rising cost of health and benefits as the number one people risk globally. And most employees (82%) feel at risk of burnout, as found in the Health on Demand survey of more than 17,000 employees. However, leading organizations can use artificial intelligence (AI) to reduce the risk of burnout and control costs.

AI systems and capabilities might drive better outcomes more broadly, but several forces may be holding them back. Realizing the full potential of AI requires high-quality data, comprehensive governance plans, technological innovation, and improved access to care.

How did health and well-being become unsustainable?

The rising cost of care strains healthcare delivery systems and everyone who depends on them. One measure to gauge these expenses is the medical trend rate: the year-over-year change in claims cost per person. Mercer Marsh Benefits (MMB) found that for 2024, insurers predict an 11.7% increase in medical trend rate outside the US — potentially the fourth double-digit increase in as many years. Employers are aware of this problem; nearly four in 10 (37%) are concerned about medical costs increasing beyond general inflation.
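The medical trend rate defined above is simple arithmetic: the percentage change in per-person claims cost from one plan year to the next. The sketch below illustrates the calculation with hypothetical cost figures chosen to reproduce an 11.7% trend; the numbers are not from the MMB research.

```python
# Illustrative calculation of a medical trend rate: the year-over-year
# change in claims cost per person. All figures are hypothetical.

def medical_trend_rate(prior_cost: float, current_cost: float) -> float:
    """Return the year-over-year percentage change in claims cost per person."""
    return (current_cost - prior_cost) / prior_cost * 100

# Hypothetical per-person claims costs for two consecutive plan years
prior_year_cost = 1_000.0
current_year_cost = 1_117.0

rate = medical_trend_rate(prior_year_cost, current_year_cost)
print(f"Medical trend rate: {rate:.1f}%")  # prints "Medical trend rate: 11.7%"
```

A trend rate above general inflation, sustained over several years as the research describes, compounds quickly: four consecutive double-digit increases would raise per-person claims cost by more than 45% overall.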

Why does the medical trend keep climbing? According to MMB’s Health Trends 2024 report, 86% of insurers believe much of the 2023 increase was due to medical inflation: the rising cost of health equipment and services. Providers would echo that sentiment: The American Hospital Association notes that supply chain pressures drove up the cost of medical supplies by 18.5% from 2019 to 2022.

On the services side, the healthcare talent pipeline is getting weaker and more expensive according to People Risk 2024 research. The sector’s HR leaders report that skills shortages and rising labor costs are their top two workforce challenges this year, and nearly half of executives (46%) believe their current talent models will struggle to meet demand. The talent agrees: According to the survey that underpins Mercer’s Global Talent Trends 2024, four in five healthcare employees feel close to burnout and 29% are planning to quit.

Benefits utilization has been another key cost inflator, according to three out of four insurers in 2023. This dynamic makes it more costly for plan administrators to process and pay claims, which in turn leads to higher premiums. Employers can respond by optimizing their health plan design for high-quality care and smart cost-sharing. What’s more, leaders can effectively guide employees toward desired behaviors; encouraging outpatient treatment within provider networks could be an ideal way to start.

Despite rising costs, health risks and operational challenges, insurers believe that employers in 2024 will prioritize making plan improvements to attract and engage talent (57%) over reducing plan coverage to manage cost (43%). It’s a noble ambition, but for such an investment to yield returns — especially in today’s climate — the whole system needs a reset.

Realizing the promise of AI in health and benefits

Armed with the power to learn, analyze, predict and create, AI can help solve some of the biggest problems in health and wellness. The rise of large language models (LLMs) means AI has the potential to supercharge employee health and benefits through increased efficiency and so much more. Productivity gains may be stealing the headlines today, but it’s in AI’s ability to predict and personalize that we see the promise of a better tomorrow. 

Navigating health and benefits AI risks and impacts

Navigating health and benefits AI risks and impacts (a sample list): Key AI-related risks include errors, misinformation and bias; security and data vulnerabilities; and a lack of human reasoning. AI’s impact on health relates to operational efficiencies, clinical care and imaging, personalized medicine and predictive analytics. AI’s impact on benefits includes advanced analytics, benefits navigation, communication and customer service.

AI’s potential impact on employee health:

For employers and their people, AI can drive tremendous cost savings and better patient outcomes. Potential applications include:

Some generative AI tools can create “digital twins,” virtual models of assets such as healthcare facilities, and map out floor plans to help optimize workflows and operations. Other systems can schedule appointments, predict wait times, and field common patient inquiries to maximize efficiency and reduce the burden on workers.

AI’s potential impact on employee benefits:

For benefits professionals and the employees they support, AI offers a range of ways to provide relevant health benefits, elevate the employee experience and improve access to care. Possible use cases include:

When asked what would most improve their compensation, 45% of workers chose more types of rewards and personalization. Benefits experts comb through stacks of data to determine what’s available in the market, which options their firms can provide, and whether employees want or need certain programs. AI can analyze it all in a fraction of the time, and offer suggestions to support more informed decision-making.
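Before any model suggests plan changes, much of that analysis reduces to structured aggregation of employee preference data. The sketch below shows a minimal version of that first step, ranking benefit options by demand from survey responses; the benefit names and data are hypothetical assumptions, not part of the original research.

```python
# A minimal sketch of ranking benefit options by employee demand from
# survey data, the kind of aggregation an AI-assisted benefits tool
# might perform before recommending plan changes. Data is hypothetical.

from collections import Counter

# Hypothetical survey responses: each employee lists preferred benefits
responses = [
    ["mental_health", "telemedicine"],
    ["mental_health", "fertility_support"],
    ["telemedicine"],
    ["mental_health"],
]

def rank_benefits(survey_responses):
    """Count how often each benefit is requested and rank by demand."""
    counts = Counter(choice for response in survey_responses for choice in response)
    return counts.most_common()

ranking = rank_benefits(responses)
print(ranking)  # mental_health ranks first with 3 mentions
```

An LLM-based tool would layer natural-language summarization and market context on top of aggregations like this, but the underlying decision support still depends on clean, structured inputs.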

AI’s potential impact on providers and insurers

For health and insurance firms, AI can stimulate cost savings and better outcomes for employers and their people. How might AI impact the return on investment of benefits spend? When evaluating plans and provider networks, consider these use cases:

Clinicians often cite electronic health records (EHRs) as a cause of job dissatisfaction and burnout. AI tools such as DAX Copilot can help manage EHRs to reduce the burden on provider operations.

Some of these applications are too new and untested to implement safely today, but they certainly change how we think about our current tech landscape and the benefits ecosystem. To wield AI effectively, leading institutions are not just embracing the art of the possible — they are taking steps to mitigate known and emerging risks.

Top-of-mind concerns about AI adoption in employee health and benefits

Some concerns about AI are already in focus. We know LLMs can hallucinate, producing flawed outputs without warning and making suggestions that, without validation, can have dire consequences. We’ve seen early versions of certain tools echo the biases that lurk in their training data. As we connect these systems to patient records and life-or-death decisions, the stakes couldn’t be higher. Here are the key risks and roadblocks to watch for.

Data concerns

Data is the backbone of AI — LLMs need droves of it for peak performance. Yet healthcare and benefits data is highly regulated, and different countries have different rules for using and sharing it. Without a universal standard, these regulations could make it immensely difficult to build and use AI tools for healthcare needs.

Data quality is another big issue. Healthcare data comes in multiple formats, and merging or converting between them could increase the risk of errors. Biased AI training data may not reflect certain patient populations or the latest standards and findings — potentially driving flawed and high-risk decisions. What’s more, a lack of transparency in the sources of training data can make it difficult for humans to fact-check an AI model’s outputs.

One solution to the data dilemma is synthetic data: artificial information that’s created electronically to support predictive analytics, software development and machine learning. Synthetic data is faster and cheaper to acquire than real-world data; it helps close the gaps in incomplete datasets, and even mimics patient data without the personal identifiers that fuel privacy concerns.
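A simplified illustration of the idea: generate records that mimic the statistical shape of real claims data (here, a normal distribution of claim amounts) while carrying no personal identifiers. The field names, distribution parameters and outpatient share below are assumptions chosen for illustration, not properties of any real dataset.

```python
# A simplified sketch of synthetic data generation: claim records that
# mimic an assumed statistical distribution of real data but contain
# no personal identifiers. All parameters are illustrative assumptions.

import random

def generate_synthetic_claims(n, mean_cost=1200.0, std_cost=300.0, seed=42):
    """Create n synthetic claim records with no personal identifiers."""
    rng = random.Random(seed)  # fixed seed keeps the output reproducible
    return [
        {
            "claim_id": f"SYN-{i:05d}",        # synthetic ID, not a real person
            "claim_amount": round(max(0.0, rng.gauss(mean_cost, std_cost)), 2),
            "outpatient": rng.random() < 0.7,  # assumed 70% outpatient share
        }
        for i in range(n)
    ]

for claim in generate_synthetic_claims(5):
    print(claim)
```

Production-grade synthetic data uses far more sophisticated generative models that preserve correlations across many fields, but the privacy property is the same: no record maps back to an individual.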

Regulation and governance

Consider the issue of medical indemnity liability: If health and benefits workers use AI to help them counsel, diagnose and decide, who bears responsibility when something goes wrong?

Given its potential impact on public health and human rights, AI for health and well-being is subject to tight regulations and scrutiny from officials. A number of governments are rolling out their own laws, though the European Union (EU) AI Act is perhaps the most comprehensive to date.

The act lays out several functions and features of AI systems that would qualify them as high-risk and thus subject them to additional standards. AI tools that control access to health benefits, support critical treatment decisions, or use biometrics for certain tasks are all deemed high-risk.

To comply with regulations like the EU’s, companies can strengthen controls and decision-making to close emerging governance gaps. Since the act classifies risk by “use case” (how LLMs and domain-specific tools are applied), these areas will require greater attention. Organizations may also need to strengthen their ethical AI policies to ensure a human-centric approach to AI adoption, embedding diverse input and potential risks into early thinking.

Stifled innovation

One popular critique of AI regulation is that it curbs investments and risk-taking, which drive innovation. Yet given the outsized role of healthcare and benefits in people’s well-being, the “safe to fail” approach that works for some industries could have dire consequences for public health and patient confidentiality. For healthcare applications, it would be better for AI developers to follow the medical mantra: “First, do no harm.”

The sheer number of participants in health and well-being, from executives to investors and politicians, also makes it difficult to spark change. AI startups, for instance, face stiff competition from legacy software vendors controlling the market.

Healthcare access

Socioeconomic differences can drive massive health disparities. People from rural, low-income and/or underrepresented communities tend to face more barriers — financial, linguistic, technological, educational and logistical — to effectively accessing healthcare resources, including benefits. The advent of AI models that support more languages than ChatGPT may close knowledge gaps, but not developmental ones.

In theory, healthcare delivery systems could use AI to bridge these gaps. AI can support cost savings, personalization, translations and efficiencies so more patients can benefit. But the converse is also true.

AI’s potential to fuel bias and discrimination is especially problematic in health and benefits. It could help identify and refuse coverage for high-risk populations and individuals — those who need care the most. And low-quality training data that doesn’t include certain groups’ medical history could inadvertently drive decisions that aren’t in their best interest.
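One concrete safeguard against the training-data problem described above is a representation audit: before a model is trained, check whether each population group meets a minimum share of the dataset. The group labels, field name and threshold below are illustrative assumptions.

```python
# A minimal sketch of a representation audit: flag groups whose share
# of the training data falls below a minimum threshold, so gaps can be
# addressed before a model is trained. Labels and data are hypothetical.

from collections import Counter

def underrepresented_groups(records, group_key, min_share=0.10):
    """Return groups whose share of records falls below min_share."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < min_share)

# Hypothetical training records skewed toward urban patients
data = (
    [{"region": "urban"}] * 90
    + [{"region": "rural"}] * 10
    + [{"region": "remote"}] * 2
)

print(underrepresented_groups(data, "region"))  # rural and remote fall below 10%
```

Audits like this do not remove bias on their own, but they make the gaps visible early, when collecting more data or reweighting the sample is still cheap.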

Optimizing healthcare and employee well-being

The only threat greater than those above is the risk of inaction. Busy benefits professionals — and the vendors they choose — need the kind of human-centric productivity that only work design, assessments and upskilling can provide. AI can help solve the puzzle, but only digital-first cultures will unlock its full potential. For insights and support in building these cultures, contact a consultant.

Since being digital is a firmwide endeavor, one potential challenge is resistance to change. Employees might avoid using AI-powered health resources. Older workers, who tend to file more claims, might be especially AI-averse. By using generative AI to personalize benefits packages and related communications, leaders can more effectively build buy-in and take-up with different persona groups.

For employers more broadly, comprehensive and affordable healthcare (including mental health) and a robust benefits strategy (including active cost management) can boost the corporate immune system, helping identify, predict and mitigate risks to both the enterprise and its people. Organizations can use health benefits and other rewards to address these risks, do the right thing for employees and society, and even build trust and equity in the process. Discover how solutions from MMB can lift employee well-being programs with insights and analytics — or consult an expert to learn more.

 

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.

About the author(s)