Shortly after UnitedHealthcare CEO Brian Thompson was gunned down in front of the New York Hilton, previously published reports began recirculating about his company's use of AI to deny health care coverage to patients, especially people over 65 and people with mental illness.
A Senate report published in October found that UnitedHealthcare's denial rate for post-acute care jumped from 10.9% in 2020 to 22.7% in 2022. Its use of AI models in deciding which patients could go to rehab after leaving a hospital led to denial rates that were nine times higher in 2022 than they were in 2019. UnitedHealthcare did not respond to a request for comment.
A class-action lawsuit was filed against the company last year for its use of AI to "wrongfully deny elderly patients care owed to them under Medicare Advantage Plans by overriding their treating physicians' determinations as to medically necessary care based on an AI model that defendants know has a 90% error rate," according to the complaint.
"Companies are using this technology to do a lot more denials, a lot of inappropriate denials," Dr. Ashish Jha, Brown University School Of Public Health dean, told CNBC this week. "Some data suggests that 25% to 30% of claims may be denied using AI technology."
This isn't just about AI: insurers are seeking to cut costs any way they can, and denying coverage is a business decision. But increased reliance on algorithms rather than humans, combined with an emphasis on speed and lower costs, has led to massive growth in the number of people who are denied coverage for care and must either pay for it themselves or go without.
Banks are more heavily regulated and have therefore been far more cautious about using artificial intelligence for decisions that affect consumers. But financial institutions are gradually deploying AI in more areas of their business, and it's inevitable that the technology will start to touch consumers. Banks can learn from what the insurance industry is doing.
AI in health insurance
"Insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient's treatment," said a
"Older people who spent their lives paying into Medicare, and are now facing amputation, fast-spreading cancers, and other devastating diagnoses, are left to either pay for their care themselves or get by without it. If they disagree, they can file an appeal, and spend months trying to recover their costs, even if they don't recover from their illnesses."
One doctor Stat interviewed said patients who have three months to live are forced into a denial and appeals process that typically lasts two and a half years.
Banks' cautious approach
So far, banks have been far more cautious about using AI. They use it to detect fraud and cybersecurity threats, to summarize customer service calls and to write draft versions of emails and reports.
Banks have been slow to use AI in lending decisions, partly because lending is heavily regulated. The Equal Credit Opportunity Act of 1974, for instance, prohibits lenders from discriminating on the basis of race, color, religion, national origin, sex, marital status, age or receipt of public assistance benefits. It also gives consumers the right to know why a credit application was denied or why less favorable terms were offered.
Banks are also subject to disparate impact rules, which forbid lending policies that appear neutral but have a negative effect on protected groups. Regulators have said banks must comply with all existing rules, including ECOA, when they use AI, and that they must provide clear reasons for loan denials.
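To make the disparate impact idea concrete, here is a minimal sketch of the "four-fifths rule," a conventional first-pass screen for it: if a protected group's approval rate falls below 80% of the control group's, the policy gets flagged for closer review. The data and threshold logic below are illustrative only, not any bank's, regulator's or fairness-testing firm's actual methodology.

```python
# Minimal sketch of a "four-fifths rule" disparate impact screen.
# Hypothetical data; not any bank's or regulator's actual methodology.

def approval_rate(decisions):
    """Share of applications approved; decisions are True/False values."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, control):
    """Ratio of the protected group's approval rate to the control group's.
    A value below 0.8 is the conventional flag for potential disparate
    impact -- a screen that prompts review, not a legal finding."""
    return approval_rate(protected) / approval_rate(control)

# Hypothetical loan decisions under a facially neutral policy:
protected_group = [True, False, False, True, False, True, False, False]  # 3 of 8 approved
control_group = [True, True, False, True, True, False, True, True]       # 6 of 8 approved

ratio = adverse_impact_ratio(protected_group, control_group)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.50 -- below 0.8, flag for review
```

In practice, fairness testing goes well beyond this screen, controlling for legitimate credit factors and searching for less discriminatory alternatives, but the ratio captures the basic idea of a neutral-looking policy with an unequal effect.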
The few banks that use AI in loan approvals, and are willing to talk about it, are generally trying to approve people who don't qualify for traditional credit because their FICO scores are too low.
Verity Credit Union in Seattle, for instance, has been using an AI model from Zest AI to score applicants for unsecured auto loans, credit cards and personal loans.
"A FICO score really only looks at five or six different pieces of data," CEO Tonita Webb said in an interview earlier this year. "There's lots of other ways that we can get more information about somebody's character. Someone shouldn't have to pay for the rest of their lives for maybe a blip in their lives."
The credit union's use of AI-based scoring has led to a 271% increase in loan approvals for individuals aged 62 and older, 177% for African Americans, 375% for Asian Pacific Islanders, 194% for female borrowers and 158% for Hispanic borrowers.
That said, AI also has the potential to do harm in lending: charging higher prices to people in minority neighborhoods, for instance, or marketing well-priced products only in wealthy neighborhoods.
"I don't think AI is the problem," said Kareem Saleh, founder and CEO of FairPlay, a company that conducts fairness tests on banks' AI lending models. "The problem is the incentives. Insurers have strong incentives to minimize payouts. This creates an inherent tension in claims administration that doesn't exist in lending. The incentive for banks is to extend loans profitably."
Saleh also said lending decisions have clear, measurable outcomes that allow for continuous improvement of AI models.
"We can observe whether loans are repaid or default, providing objective feedback to refine the algorithms," Saleh said. "This creates accountability and allows us to measure fairness and accuracy in a very precise manner."
Insurance claims, on the other hand, often involve more subjective determinations of medical necessity or damage assessment, he said.
"In my view, this isn't about AI itself, but rather the importance of aligned incentives and robust oversight," Saleh said. "The focus should be on responsible AI deployment rather than artificial restrictions on its use."