Health plan administrators should consider AI guardrails

Artificial intelligence (AI) has the potential to transform health and benefits, and group health plan service providers are already using AI in a variety of contexts. However, because much about AI remains unknown or unproven, plan administrators should consider setting guardrails that encourage responsible use.
General oversight should include the prudent selection and monitoring of service providers that use AI. In addition, review any internal AI use under the group health plan, which may involve applying current legal and compliance requirements in novel ways. Continue to watch legislation, regulation and litigation unfold, and be prepared to adapt plans as needed. Specific steps include:
Ask all third-party service providers about their use of AI for plan design, administration and decision-making purposes. Make the same inquiries within the organization to learn which plan administrative functions are performed using AI, and create an inventory of the plan’s use of AI. Third-party administrators (TPAs) and carriers are likely using algorithms to decide many types of claims. Other AI uses might involve advanced analytics, communications, personalized health and wellness, benefit navigation and customer service. Some healthcare providers are using AI to make medical diagnoses. Plan fiduciaries should be aware of both internal and service providers’ use of AI and should factor AI into service provider reviews and overall plan management.
Understand and apply current legal and compliance requirements to any AI used under a group health plan. This may require novel interpretations. A few examples to consider:
- Review and update Health Insurance Portability and Accountability Act (HIPAA) privacy and security training, policies and procedures, as well as business associate agreements, and identify any new business associates.
- Under the Mental Health Parity and Addiction Equity Act (MHPAEA), identify algorithms that may create a nonquantitative treatment limitation, examine them for compliance and include them in the comparative analysis.
- For ERISA claims and appeals procedures, ensure that required timelines and processes are met, and review claims and appeals decided through AI for fraud, waste, abuse and discrimination.

Pay particular attention to the Affordable Care Act Section 1557 nondiscrimination rules. These final rules include a ban on discriminatory patient-care decision support tools, including those using AI or clinical algorithms. Review tools that support clinical decision-making, such as tools that assess a patient’s health risk, apply prior-authorization requirements or make medical necessity determinations. The rules require covered entities to make efforts to identify any tools used in health programs or activities that take race, color, national origin, sex, age or disability as input variables or factors, and to mitigate the risk of discrimination from using such tools. For example, consider an algorithm that targets high-risk individuals for additional resources and uses healthcare costs as a proxy for need. The algorithm may exhibit racial bias in predicting who needs extra care, because patients of a particular race with the same level of need may have lower healthcare costs for various reasons, such as lack of access to care or distrust of the healthcare system. The algorithm falsely concludes that those patients are healthier than equally sick patients who do access care.
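To make the cost-as-proxy problem concrete, here is a minimal, hypothetical sketch. The group labels, access factor and dollar figures below are invented for illustration and do not come from the rules or from any actual vendor tool; the point is only that ranking patients by observed cost can under-select a group whose spending understates its true need.

```python
# Hypothetical illustration of the cost-as-proxy bias described above.
# All numbers and group labels are made up; this is not any vendor's actual algorithm.
import random

random.seed(0)

def make_patient(group):
    """Both groups have the same distribution of true clinical need, but
    group 'B' incurs lower observed costs for the same need (e.g., due to
    reduced access to care)."""
    need = random.uniform(0, 1)                   # true clinical need
    access_factor = 1.0 if group == "A" else 0.6  # group B under-utilizes care
    cost = need * access_factor * 10_000          # observed annual cost (the proxy)
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

def top_decile(people, key):
    """Return the 10% of patients ranked highest on the given attribute."""
    ranked = sorted(people, key=lambda p: p[key], reverse=True)
    return ranked[: len(ranked) // 10]

# Outreach list built on the cost proxy vs. one built on true need
by_cost = top_decile(patients, "cost")
by_need = top_decile(patients, "need")

share_b_cost = sum(p["group"] == "B" for p in by_cost) / len(by_cost)
share_b_need = sum(p["group"] == "B" for p in by_need) / len(by_need)

print(f"Group B share of outreach list using cost proxy: {share_b_cost:.0%}")
print(f"Group B share of outreach list using true need:  {share_b_need:.0%}")
# The cost-based list under-selects group B even though the groups have
# identical need -- the kind of disparity a Section 1557 review of decision
# support tools is meant to surface and mitigate.
```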
Note that the Section 1557 nondiscrimination rules don’t apply directly to most employers’ or group health plan sponsors’ employment practices, including providing employee health benefits. However, the rules often apply to TPAs or carriers working with employer group health plans. The rules will take effect by May 1, 2025.
Watch for legislation, regulation and litigation to unfold. President Trump has revoked an executive order issued by President Biden on the safe, secure and trustworthy development and use of AI, which had directed multiple federal regulators to develop strategic plans for the responsible use of AI. A bipartisan group of US senators has developed a road map for AI policy in the Senate. Federal regulators are only just beginning to issue AI regulations and Congress appears to be studying the issue, so states may take the first steps in enacting AI legislation. In the meantime, several AI-related lawsuits have been filed:
- The 9th US Circuit Court of Appeals recently revived a class action challenging the use of AI to process mental health and substance use disorder (MH/SUD) claims (Ryan v. UnitedHealth Group). The carrier used an algorithm only for MH/SUD claims to assess progress and refer cases for peer review. Plaintiffs argue this is a more stringent process than the one used for medical/surgical claims, violating the mental health parity rules and constituting a breach of fiduciary duty.
- Other lawsuits are challenging the use of AI in claims administration on different grounds. For example, one lawsuit alleges that an algorithm allows a TPA’s doctors to automatically deny, in large batches, payments for treatments that do not match certain preset criteria, bypassing the physician review process (Kisting-Leung v. CIGNA). Two other suits accuse TPAs of using AI (an algorithm) to override doctors’ recommendations and deny post-acute care for patients in Medicare Advantage plans (Barrows v. Humana and Lokken v. UnitedHealth Group).
Review plan documents and processes to determine what updates need to be made to reflect the use of AI.