Insurers' values determine claims denials more than AI


Whether AI is being used to deny property and casualty insurance claims, or to settle them for less than they are worth, may not be clear cut: insurers can reach the same outcome by analyzing other aspects of a claim, such as the claimant's personal finances.

Amy Bach, executive director, United Policyholders.

"From the advent of things like data mining, AI, risk scoring and the C.L.U.E. database [Comprehensive Loss Underwriting Exchange auto and property reports from LexisNexis], the more data insurers have about consumers, the more we see the risk of them using that data to exploit consumers' vulnerability," said Amy Bach, executive director of the United Policyholders consumer group. "Part of that is analyzing in the claim process, people's vulnerabilities around negotiation. Like if this person is not in good financial condition anyway, so they're desperate for money, so we're going to throw money at them, even if it's not what we owe them fully, but because they are hurting for cash, they're going to take the offer."

State regulators, through the National Association of Insurance Commissioners (NAIC), have issued a Model AI Bulletin offering guidance on how to regulate insurers' use of AI. The bulletin has been adopted by 11 states, and four others have written its points into their own regulations. It draws on the NAIC's Unfair Claims Settlement Practices Model Act (UCSPA), originally drafted in 1990, which defines the claims processes and actions subject to penalties but does not specifically cover the use of AI.

Scott Harrison, co-founder of the American InsurTech Council.

Generally, insurers should have a governance structure for their use of AI, said Scott Harrison, co-founder of the American InsurTech Council (AITC), a group representing insurtechs, insurers and related stakeholders. That includes "good policies and procedures around understanding what it wants to do with AI, the procurement, ensuring compliance with the law and ensuring consumer protections," he said.

Ron Trozzo, operations director at SCA Claim Services.

How an insurer uses AI to make claims decisions depends on the parameters it sets and reflects management's values, said Ron Trozzo, operations director at SCA Claim Services, a company that assesses property damage claims for insurers. SCA works with commercial carriers covering transportation, maritime and aircraft risks, and its technology includes AI capabilities. Carriers make claims decisions using SCA's input, and SCA also provides second opinions when insureds dispute those decisions.

"The AI used in claims handling will be a direct reflection of what the company's values are, what the executive leadership is thinking about, and what are they prioritizing," Trozzo said. "It's important to understand that AI has existed for a long time in this space. However, how it's being implemented has really warped and changed over time as AI becomes a bit smarter and can be applied into more components of the claims process. Our goal as SCA is to make sure that we can settle a claim, not just fast, but also accurately for the client."

AI claims models can be improved, Trozzo explained. "Fine tuning and balancing of these AI models will allow us to toe the line, where AI can be in a space where a human would have made the same decision every time," he said. Claims decisions can also come down to whether the insurance coverage was the right contract for the insured to begin with, he added.

"As people, we might sign contracts before reading them," Trozzo said. "They get wordy, they get complicated, they get in a space where it's very legalese, and we might just say, 'Oh, it's fine. It's just an insurance policy. Let me just sign off on it.' Until that edge case happens where you're about to get denied, and all of a sudden, life comes crashing down and it's difficult to deal with."

Human review of AI claims decisions will not necessarily correct errors, though, according to Trozzo. Reviewers may be incentivized to agree with an AI decision, especially if their bonuses depend on denying claims. "Just because a human is reviewing, it does not automatically mean that the right thing was done," Trozzo said. "We conflate the idea that because a human reviewed it, they must be doing the right thing."
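
One way to detect the rubber-stamping Trozzo describes is to track how often reviewers actually overturn the model. The following sketch is again hypothetical, assuming a simple log of AI recommendations paired with human decisions; the field names and the alarm level are invented.

```python
# Illustrative audit sketch with a hypothetical data shape: if human
# reviewers almost never overturn the model's denial recommendations,
# the "human review" step may be a rubber stamp rather than real oversight.

def denial_agreement_rate(decisions: list[dict]) -> float:
    """Share of AI denial recommendations that the human reviewer upheld."""
    denials = [d for d in decisions if d["ai_recommendation"] == "deny"]
    if not denials:
        return 0.0
    upheld = sum(1 for d in denials if d["human_decision"] == "deny")
    return upheld / len(denials)


sample = [
    {"ai_recommendation": "deny", "human_decision": "deny"},
    {"ai_recommendation": "deny", "human_decision": "approve"},
    {"ai_recommendation": "deny", "human_decision": "deny"},
]
rate = denial_agreement_rate(sample)
print(f"Reviewers upheld {rate:.0%} of AI denial recommendations")
if rate > 0.98:  # hypothetical alarm level
    print("Warning: the review step may be rubber-stamping the model")
```

A persistently near-100% agreement rate would not prove wrongdoing, but it is the kind of signal the governance programs Harrison describes could watch for.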
