In a stark illustration of the pitfalls of relying on artificial intelligence (AI) in critical decision-making, two families are taking health insurance giant UnitedHealth Group to court. The families allege that UnitedHealth's use of AI programs to determine medical care coverage has led to questionable denials and shortened rehabilitation stays, with potentially dire consequences.
For years, decisions that profoundly impact individuals' access to healthcare have been made in the back offices of health insurance companies. The lawsuit contends, however, that UnitedHealth's AI technology is now shaping decisions with life-altering consequences, in this case the denial or curtailment of rehabilitation stays for two elderly men in the months before their deaths.
The families argue that UnitedHealth’s AI is making decisions that are “rigid and unrealistic” concerning patients’ recovery from serious illnesses. These decisions, they claim, result in the denial of care in skilled nursing and rehab centers, which should be covered under Medicare Advantage plans. The lawsuit, seeking class-action status, asserts that allowing AI to override doctors’ recommendations for patients is illegal, emphasizing that such assessments should be the purview of medical professionals.
One central concern raised by the families is that the insurer's AI algorithms override doctors' recommendations despite a purportedly high error rate. The families allege that more than 90% of the program's patient claim denials were eventually overturned through internal appeals or by a federal administrative law judge. Yet only a tiny fraction of patients, about 0.2%, ever contest claim denials through the appeals process.
The families and their legal representation argue that UnitedHealth’s strategy appears geared towards prioritizing profits over the well-being of those they are contractually obligated to cover. Ryan Clarkson, an attorney involved in several cases against companies employing AI, decried the situation, stating, “It’s just greed.”
UnitedHealth responded to the allegations by asserting that the naviHealth AI tool named in the lawsuit is not used to make coverage determinations. The company said the tool serves as a guide to inform providers, families, and caregivers about the type of assistance and care patients may need, both within the facility and after returning home.
This lawsuit underscores growing concerns about the unchecked use of AI in critical decision-making, raising questions about transparency, accountability, and the ethics of AI's role in healthcare and insurance. As the legal battle unfolds, it prompts a broader conversation about responsible AI deployment and the regulatory frameworks needed to safeguard against unintended consequences in vital sectors such as healthcare.