In a digital-first economy, insurance carriers must deliver differentiated, superior experiences to stay competitive and attract customers. Using solutions that streamline workflows, improve accuracy, and make the insurance claims process easy and convenient for policyholders is now more important than ever.
From automated claims adjusting platforms that help desk adjusters route, manage, and resolve property claims faster to GenAI solutions that automate the claim coverage review process and facilitate faster decision-making, new technology solutions are driving a more efficient claims process and ensuring more satisfied policyholders.
This panel session will include a discussion of emerging technologies and solutions in the claims space, as well as the risks and considerations that organizations must factor in when embarking on digital transformation.
Transcription:
Scott Harrison (00:08):
Good afternoon, everybody. Welcome. My name is Scott. Is that me?
(00:17):
Oh,
(00:19):
All right, thank you. Hey, my name is Scott Harrison. I'm very pleased to be moderating this panel today. I think this is going to be a really interesting discussion about claims. Before I let my co-panelists introduce themselves, let me give you a roadmap so nobody leaves early, since we're saving the best part for last. We're going to cover the current state of claims and how companies are using digital innovation, particularly AI, in the claims space, and also where things are going. We've got three folks here who are experts with long experience, either currently in claims or in past parts of their careers, but still very closely connected to this space. And then we're going to talk about some of the risks that companies should be thinking about when they're integrating AI into the claims space.
(01:23):
And then we're going to close, if we have time, by talking about what's going on on the regulatory front. State insurance regulators are getting very active, not just around AI involving claims, but around insurer use of AI generally, and they're currently at work developing regulatory frameworks that will have a very significant impact on third-party providers of AI systems and solutions to insurance carriers. So, very quickly about myself: I am here in my capacity as a co-founder of an organization called the American InsurTech Council. We are an advocacy, or lobbying, group focused on the development of common-sense regulatory policy involving AI and other forms of digital innovation in the states, at the NAIC, and in state legislatures. I've got a long track record, about 30 years, working in the insurance regulatory space, both as a lawyer and as an advisor. I was also deputy superintendent of insurance in the New York Insurance Department and, prior to that in the early nineties, deputy commissioner of insurance. So that's a little bit about me. David, you want to introduce yourself?
David Vanalek (02:39):
Yep. Good afternoon, everyone. My name is David Vanalek. I'm the Chief Legal and Compliance Officer of Richmond National Insurance Company. We are a startup carrier in the surplus lines space, writing commercial accounts for small to mid-size businesses, and I'm the head of all legal, regulatory, and compliance work at the company. Prior to that, I spent 14 years at a large international insurer as their chief operating officer of claims, and before that I was overseeing professional liability, management liability, and cyber liability claims. So I have a deep background in claims matters. And for about 10 years before that, I was in private practice in San Francisco as a commercial and securities litigator. So I come at this topic having watched evolving and trending technologies for a long time, figuring out how to fold them into a larger organization, and I'm really excited about some of the things that are available today.
Scott Harrison (03:42):
Lance?
Lance Ondrej (03:42):
Yes, I'm Lance Ondrej. I work for Germania Insurance, a regional insurer based out of Texas that's been around for about 130 years. The perspective I'm bringing to the table is that I'm primarily a claims guy at heart. I've been in the industry for 26 years: I started at Farmers, spent 15 years at Liberty Mutual, and have spent the last nine at Germania. So I've seen the big-box perspective as well as the smaller-company perspective. Looking forward to discussing this topic.
Sam Krishnamurthy (04:22):
Hey everyone, good afternoon, Sam Krishnamurthy. I do have a long last name, but I go by just Sam. I've been in the industry for 24-plus years. I started off as an IT engineer by trade and have worked all along in investment banking, the mutual fund industry, and financial services. I worked my way up as an IT engineer and took on various leadership roles. Currently I'm passionate about product development, leading engineering teams with the title of chief technology officer here at Crawford & Company. I moved to insurance back in 2008 and have been at Crawford for almost 14 years. The one thing I'm passionate about is building products that can reimagine the end-to-end claims experience using people, process, and technology.
Scott Harrison (05:09):
Great. Before I ask my first question, let me ask the audience something. I always like to find out a little bit about who's in the room. How many folks here currently work with an insurance company? So maybe half, and the others are on the technology solution provider side. Okay. And how many folks are working in or around, or have something to do with, claims? Okay, pretty significant. I appreciate that. So Sam, let's start with you, and there's a good reason I ask this question, right? I think we have folks who are not novices around this. Oh, one last question, if you're comfortable answering it: how many of your companies are currently using, or thinking about integrating, AI or AI systems into your claims handling function?
(06:13):
Okay, anybody ruled it out? No, don't answer that one, just kidding. All right. Probably those of you who didn't raise your hand are here because you're thinking about it, or your company's thinking about it, and you're trying to assess the risks. One of the things I discovered when we did our prep call last week is that everybody in this group thinks a lot about the risk associated with this space and comes at it from a different perspective. I think that'll be a really interesting part of this topic. And for those of you who are here because you're interested in learning more and probably thinking about the risks, hopefully that'll be an important or interesting part of our discussion. So Sam, from your perspective, what's the current state, generally, of companies integrating or using AI in claims handling and claims management?
Sam Krishnamurthy (07:11):
Yeah, I'll probably start with the broader context. We've seen enterprises use this to uplift use cases in three areas. First is customer experience: how can we improve it? Second is operational experience, and last but not least, employee experience. Diving into the claims space, I can give three examples where we have seen AI transform the entire end-to-end claims value ecosystem. Over the years we have built a lot of predictive models that predict the likely next outcome of the claim, whether it's prone to fraud, litigation, or subrogation. We have also built a model to do intelligent triage, right? The better the FNOL data you can curate from your policyholder, whether it's personal lines or commercial lines, the better you can triage the claim, based on its severity and complexity, to the right supply channel to adjudicate the best outcome for the policyholder.
(08:15):
So that's the first use case. The second use case: we are working on a product called Coverage AI. This is an excellent use case to streamline policy documentation. Say an adjuster has a claim coverage question to interpret: is this claim covered, not covered, or covered with a limitation? Now they can use GenAI. The product team has created a bespoke model for a particular carrier, where we ingest all of their documentation, a dec page, a base policy, and endorsements, and the adjuster can ask questions like, help me summarize coverage for this particular claim. It will take all the FNOL data context, go into the policy, learn how to navigate through it, and summarize a recommendation. Not the final response, because there is a human in the loop to further adjudicate and reach the right outcome. I believe personally that only humans can bring the empathy, ethics, and contextual understanding that AI might overlook. Last but not least is improving customer experience in call centers. You all must have heard about conversational AI, where AI can do a lot of straight-through processing: identifying and qualifying a call, collecting the FNOL information, and transferring the call to the agent with a lot of information upfront. It will also assist the agent on how to handle the call to deliver the best customer experience. So that's it.
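As a rough illustration of the retrieval-grounded pattern Sam describes for Coverage AI, here is a minimal Python sketch. The keyword-overlap scoring stands in for the vector embeddings a real system would use, and the class names, prompt wording, and helper functions are illustrative assumptions, not Crawford's implementation.

```python
from dataclasses import dataclass

@dataclass
class PolicySection:
    source: str   # e.g. "dec page", "base policy", "endorsement 3"
    text: str

def score(question: str, section: PolicySection) -> float:
    # Crude keyword-overlap relevance; a real system would use vector
    # embeddings here (an assumption, not Crawford's actual design).
    q = set(question.lower().split())
    s = set(section.text.lower().split())
    return len(q & s) / max(len(q), 1)

def retrieve(question: str, sections: list[PolicySection], k: int = 3) -> list[PolicySection]:
    # Pull the k most relevant policy excerpts for the adjuster's question.
    return sorted(sections, key=lambda sec: score(question, sec), reverse=True)[:k]

def build_prompt(question: str, fnol_context: str, hits: list[PolicySection]) -> str:
    # Ground the model in the retrieved excerpts and force citations.
    cited = "\n\n".join(f"[{h.source}]\n{h.text}" for h in hits)
    return (
        "You are assisting a claims adjuster. Using ONLY the policy excerpts "
        "below, summarize whether the loss appears covered, not covered, or "
        "covered with a limitation, and cite the excerpt labels you relied on. "
        "This is a recommendation for human review, not a coverage decision.\n\n"
        f"FNOL context: {fnol_context}\n\nQuestion: {question}\n\n{cited}"
    )
```

The design point the sketch tries to capture is that the model only sees retrieved, labeled excerpts and is asked for a recommendation with citations, which is what keeps the adjuster, the human in the loop, able to verify the answer.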
Scott Harrison (09:51):
Awesome. Lance,
Lance Ondrej (09:54):
Just to expand on a couple of things that were mentioned: voice analytics from a contact center perspective is very interesting as well, for a couple of reasons. One would be the identification of a call in distress, meaning there's some type of conflict or something of that nature that the AI picks up on and goes, oh, there could be a problem with this particular call, based on certain keywords the insured or the claimant may be speaking and/or the tone of their voice expressing anger or frustration. That triggers an alert to that individual's direct manager, who then has a choice to interrupt the call, go out on the floor, or use some type of service-observe option to address it. Think about how important that is today, with a lot of individuals working either remotely or hybrid.
(11:01):
In the past, when everyone was on the floor, a leader could sometimes overhear that by earshot. Those days are gone, so this technology is helpful in doing something of that nature. And then also, my company in particular has been testing the automation of estimates. There are certain use cases where heat imaging can be used across a photo to determine the severity of the damage on a vehicle and actually either start or complete that estimate. Typically this is for simpler surface or outer-panel damage, where the software can identify what's going on, apply the algorithm, and complete the estimate. It wouldn't be a good fit for a complex loss with severe damage or multiple parties involved, but there are some use cases where this works.
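A toy sketch of the call-in-distress logic Lance describes might look like the following. The keyword list, the anger_score input (assumed to come from a vendor's acoustic model), and both thresholds are invented for illustration; real voice analytics works on audio features, so treat this as the shape of the decision only.

```python
import re

# Illustrative keyword list; a production system would tune this per book of business.
DISTRESS_KEYWORDS = {"lawyer", "attorney", "complaint", "unacceptable", "supervisor"}

def flag_call(transcript_so_far: str, anger_score: float,
              min_keyword_hits: int = 2, anger_threshold: float = 0.7) -> bool:
    # anger_score is assumed to come from the vendor's tone model
    # (0.0 calm .. 1.0 very angry); both thresholds here are made up.
    words = set(re.findall(r"[a-z']+", transcript_so_far.lower()))
    return len(DISTRESS_KEYWORDS & words) >= min_keyword_hits or anger_score >= anger_threshold

# A flagged call alerts the adjuster's direct manager, who can service-observe
# or join the call -- replacing the "overheard on the floor" signal Lance mentions.
if flag_call("I want to speak to your supervisor, this is unacceptable", 0.4):
    print("alert: call in distress -> notify manager")
```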
David Vanalek (12:05):
Yeah, and just to support the comments of Sam and Lance: the way I view this particular topic, from the commercial lines side where you have liability claims, and typically litigated liability claims, is to ask what problem we are trying to use AI to solve. In my world, it's social inflation; it's nuclear verdicts, which sit at the very end of the process. So what can you do more at the front end to avoid getting to those nuclear or thermonuclear verdicts? I'm sure you've all seen the stories out there where verdicts are coming in at well over a hundred million dollars at this point, far in excess of policy limits. To Sam's earlier point about claims triaging and severity: you start with your operating model, building your teams to have a complex team, a mid-level team, and a high-volume, low-severity team, and then you use algorithmic AI to try to triage that FNOL process and get as much as you can from the front end. But don't stop there, because as complaints are amended, we've seen countless times where the top three plaintiff's attorneys in a particular jurisdiction will parachute in at the 11th hour.
(13:21):
And if you've got somebody on a high-frequency, low-severity team, they don't know who that is. So you need the technology to be able to flag these things and elevate them, so that you can meet the claim at the moment and best protect your policyholder. The other area we see, again in the litigated claims space, is the large volume of materials coming in. You might have a complaint that's been filed that's 300 pages long, or a medical demand package that's 1,500 pages long. I know for a fact there are a couple of outfits out there right now that the plaintiff's bar is using to spin up those demand packages with generative AI and send them out to carriers. So it would be the same functionality on our side: getting your arms around it, using that technology to synthesize it and gain actionable insights, so that you can be efficient and protect your policyholder at the end of the day.
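One hedged sketch of how the amended-complaint escalation David describes could be wired up; the watchlist contents, field names, and team labels are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical watchlist of high-severity plaintiff firms by jurisdiction;
# in practice this would be driven by verdict data, not a hard-coded dict.
ATTORNEY_WATCHLIST: dict[str, set[str]] = {
    "TX-Harris": {"example firm llp", "sample trial group"},
}

@dataclass
class Claim:
    claim_id: str
    jurisdiction: str
    team: str                                   # "high-volume", "mid", or "complex"
    plaintiff_firms: set[str] = field(default_factory=set)

def on_amended_complaint(claim: Claim, new_firms: set[str]) -> Claim:
    # Re-triage when a complaint is amended: if a watchlisted firm parachutes
    # in at the 11th hour, escalate the file off the high-volume team.
    claim.plaintiff_firms |= {f.lower() for f in new_firms}
    watch = ATTORNEY_WATCHLIST.get(claim.jurisdiction, set())
    if claim.plaintiff_firms & watch and claim.team != "complex":
        claim.team = "complex"                  # route to senior handling
    return claim
```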
Scott Harrison (14:12):
Yeah. David, you just touched on something that I'm going to put a pin in and come back to: companies needing to understand what problem we are trying to solve, and with what solution. This is a reason why it's critical, and we're going to come back to this, that companies have gone through and developed a really comprehensive, thoughtful governance and risk management framework for AI. There are a lot of good reasons to do that, but one of them is that it creates a process within your company to think strategically about your use of AI, instead of just someone in the claims department coming to a conference like this, meeting one of you folks out there, coming back and saying, man, we're going to acquire this. Right? And there's no process, there are really no checks and balances, and it's not part of an overall strategy.
(15:19):
Having a really good framework and process in place puts you in a position to use AI offensively, not in an offensive way, but to further your company's strategic objectives. So we're going to come back to this, because it's really layered into some of the litigation risk and some of the other regulatory considerations that are developing. But before we get to that: what's coming next? Where do we see this area developing in the claims space over the next, well, I could say six weeks and only be slightly a smart-ass, but over the next year or two, the near-term horizon, where is this area going?
Sam Krishnamurthy (16:12):
So there are four use cases I see where I think the insurtechs are competing with others as well; I'd box them into four use cases. The first one is embedded AI, which has been around all along, right? You've got your predictive models, you're embedding them within your products, and you see a lot of SaaS applications being built. The triaging is a good example of that, but I'm now seeing a lot of GenAI use cases getting embedded within your SaaS. The second use case is agents: a new role in your company is coming, and they're already here. An agent is going to be helping an adjuster in the claims process to increase efficiency. Some people see this as a threat, but I think we've all seen the quote: the one who does not know AI will not have a job; the one who knows AI will.
(17:09):
So I think it's very important to use these kinds of tools, I would call them productivity toolkits, to help make the job more efficient. You have these AI agents helping answer any kind of question about unstructured data. It could be a police report, it could be a bordereau report, or any kind of policy document. Last but not least: GPT now has vision, it can interpret and generate text, and it can also speak. So you'll see multimodal large language models come into play. And I'm seeing a lot of companies out there creating domain-specific large language models that are fine-tuned on their own domain data. One use case I spoke about is coverage interpretation against policies. We've just started to scratch the surface of that use case, and we have several more coming along the roadmap.
Lance Ondrej (18:08):
I think generative AI is really like traditional AI on steroids. The amount of inference and intuitiveness these tools are producing is quite amazing. And whereas use cases with traditional AI were somewhat limited and clear, now that can be taken to the next level, where it can really start to decision some low-complexity problems that would otherwise be handled by a more entry-level claims person, or even worse, by someone higher-skilled whose time would be better spent on more complexity, but who, because of the nature of the role and some of the activities born in the claims process, is completing those lower-skilled tasks as well. So you think about it, and as Sam pointed out, I don't always look at this as a way to decrease overhead costs through decreased staffing.
(19:22):
Fortunately, if most of you are working with a company that's growing, what that will allow you to do is better utilize the skill set your staff has, and/or upskill as you grow, so you can absorb that additional volume while those lower-level tasks are completed by these various models and tools. The last thing I'll add is large language models and claims assistants. One of the frustrating things I hear from the claims personnel I've worked with: a lot of companies have spent a lot of time and money building knowledge bases intended to help claims personnel, but in the heat of battle, in the whirlwind, do they always take the time necessary to pinpoint the information in that knowledge management program? The truth of the matter is, sometimes it's hard to navigate and find the information you need to make a claims decision. So either they skip the process altogether, or they may not be obtaining the information they really should be reviewing. And it's blown me away to see all the notifications coming through on the SaaS tools already in place, where these AI assistants are automatically being offered, usually at a cost, of course, but it just improves the process tenfold.
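The assistant pattern Lance describes, proactively surfacing knowledge-base content instead of making the adjuster hunt for it, might reduce in sketch form to something like this; the article titles and the overlap scoring are placeholders, not any vendor's product.

```python
# Toy knowledge-base lookup that surfaces guidance while the adjuster works;
# the articles and the scoring are stand-ins for a real search index.
KNOWLEDGE_BASE = {
    "hail roof inspection checklist": "Steps for documenting hail damage...",
    "total loss valuation guidelines": "When ACV thresholds trigger total loss...",
    "subrogation referral criteria": "Refer when a third party may be liable...",
}

def suggest_articles(claim_notes: str, top_k: int = 2) -> list[str]:
    # Rank titles by word overlap with the adjuster's notes -- a stand-in for
    # the semantic search an LLM assistant would actually use.
    notes = set(claim_notes.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda title: len(notes & set(title.split())),
                    reverse=True)
    return ranked[:top_k]

# Surfacing these automatically beats hoping the adjuster searches mid-whirlwind.
print(suggest_articles("roof hail damage reported after storm"))
```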
David Vanalek (20:57):
Yeah. Similarly, again in that litigated claims space: whereas historically the claims individual or team would have to rely on outside defense counsel to put together deposition summaries of the 15 depositions taken regarding a particular incident, there are currently tools, maybe more geared toward the law firm side of the house, generative AI tools that will put that together and summarize those deposition transcripts for you. They let you search for who said what about that particular incident, on that date, at that moment, to see whose stories are more consistent with one another, and maybe who's the outlier. Is there something there? All without having to rely on outside counsel and the expense associated with paying outside counsel for that particular information. Again, this is behind the scenes, to create better efficiency and insights. Because from my perspective, at the end of the day, the policyholder, whether on the phone or on a video screen with the claims person, needs that human holding their hand to guide them through a very scary process. Folks don't like to be sued, they typically aren't used to getting sued, and they're not familiar with that process.
(22:18):
I remember that from when I was in private practice. So the more we can get our humans in front of the policyholders themselves, and allow the AI to take care of the more mundane tasks and synthesize large amounts of information, for me that would be the ideal.
Scott Harrison (22:35):
That's really interesting. Let's gradually shift, and David, that's a really good segue, so I'm going to come back to you next, back to my comments before about why it's important to have a good risk management framework. Really, the answer is that everybody in the company, from the board on down, needs to know what you're doing. They need to know how AI is being used and why it's being used. They need to know that an appropriate level of due diligence was applied to your prospective third-party vendors, and that there is ongoing due diligence to make sure your AI system is functioning as it was intended to function. I spend a lot of my time in the regulatory world, and there's a lot of concern out there about bias: intended bias, unintentional bias. But I'll be honest with you. As a practitioner, someone who advises carriers on the development of risk management frameworks, and who started my career as a lawyer doing complex insurance coverage litigation and patent litigation, that was a long time ago, but I still think like a lawyer from time to time, I worry about not just bias but accuracy.
(24:06):
We're all aware of a couple of large health insurance companies where somebody thought it was a really good idea to start running end-to-end claims processing using AI. Probably no bias, I don't know. But guess what? There was a glitch in the system, and they improperly processed what, millions of claims, millions of dollars? It's a nightmare.
(24:30):
So the company has an obligation to be in a position to confidently know that it is exercising due diligence. And the only way you can do that is to have a really good risk management framework and a really robust governance structure in place. Okay, so now you have that. David, what are the litigation risks that companies ought to be thinking about? And by the way, my advice to my clients is always: you need to have your general counsel's office at the table. They need to be part of your AI risk committee, or whatever you decide to call it, because you need to be thinking through these issues while you're developing your policies and procedures. So the company's done all that, and they still get sued, right? Or potentially get sued. What are the kinds of things that would keep, not you, but another hypothetical general counsel, awake at night, concerned about risk?
David Vanalek (25:32):
So there's a lot in that question. From that perspective, you're right: it's about building a broader artificial intelligence system program into your risk management framework. For guidance on this, the NAIC last December put together its uniform AI model bulletin to be adopted by the various states. Currently about a dozen states have adopted it; Nebraska was the most recent, back on June 21st, and DC recently adopted it as well. It lays out very clear principles to mitigate the risk you were describing: how do you mitigate against adverse consumer outcomes, which can amplify bias that your model may have been trained on initially?
(26:21):
And then it talks about building AI risk into your enterprise risk management framework and into your corporate governance structure. How do you report that up to your board? It talks about building out an acceptable AI use policy, and about how you contract with the vendors offering that technology to allow audit rights with respect to their technology. And if you dot all your i's and cross all your t's on that, yes, at the end of the day you still might get sued, right? At that point, what you can do from a defense perspective is point to the framework: it wasn't a situation where you didn't take into account those risks of hallucination, for example. There's a very famous case from last year. A law firm out of New York was litigating a particular matter, and they asked ChatGPT for case law supporting their position.
(27:14):
It came up with 12 cases, beautiful citations, and all of them were false. The defense got the brief, and the judge was like, wait a second, none of these cases are real. And in the motion for sanctions, they said, okay, plaintiff's counsel, what did you do to verify that these 12 cases were correct? And they're like, well, I asked ChatGPT if they were right, and it said yes. What else is it going to say at that point, right? So it goes back to, yes, they got sanctioned. Those are the types of situations you end up in if you don't truly appreciate the risk of the hallucinations and the bias. And I'm not even going to go into data security and privacy, because that's a whole other piece, whether you're using an open system or a closed system. But recognize that if you're putting your company's or your clients' confidential information out into an open system, that large language model is most likely training on it, and that information is going out to the world. And that's a hard stop.
Scott Harrison (28:12):
So, Lance first, then Sam. You guys are on the operations side. David's your lawyer; you've heard what your lawyer has said, but you're responsible for making it happen operationally. So what are the kinds of things you think about, taking your lawyer's good advice and thinking about the risks, but at the same time having to put this thing in motion? What are some of the things you're thinking about, or that folks who are maybe considering using AI ought to be thinking about, to take their lawyers' advice and yet operationalize it in a way that meets the business objectives of the company?
Sam Krishnamurthy (28:57):
Yeah, I'll take a stab at this. Of course, every enterprise has to adhere to the policies that have been carved out by the organization, and I would say always keep customer consent, data privacy, and security at the forefront, right? I'm passionate about building products, so what my business stakeholders and I always do is have all the sensitive conversations on day one: hey, this is our idea, this is what we're trying to ideate, we're going to take this from here to MVP to production, and this is what our roadmap looks like. Now let's talk about the processes, because almost nobody has deployed a GenAI product in production, maybe very few companies, with the exception of OpenAI. This is a new process altogether, and we have to do constant education and change management within the organization.
(29:49):
And that starts with our legal team, because I think everybody is scared about ChatGPT and all the hallucination and so on. But if we do things right, it can be successful. Obviously there are a lot of exceptions we have to deal with, and we have to carve out those processes. Whenever we talk about creating a bespoke model, where you fine-tune a large language model, that means creating a knowledge base from a customer's tenant, if you will. So we have to make sure that whatever product we're building is built on the right standards, whether it's in the cloud on Azure, Google, or AWS, with the right data residency, the right security, and the right SOC 2 Type 2 controls, so the platform is well-architected. It goes back to architecture.
(30:40):
And then after that, we have to carve out our policies. We have to work on the client consent process with the customer and the producer who's building the product. Then it's about explaining how the GenAI application is going to work; that's a change management process. Don't just trust the response GPT is going to provide you: it should have some groundedness in truth, it should have a confidence score, it should have a citation pointing to the source document, and there should be a human in the loop who can give feedback to the model so the model can learn and evolve. The way we're trying to build applications, you can give feedback with a thumbs-up and thumbs-down option, it can take in comments, and there is detailed visualization and reporting running at the backend.
(31:25):
There are guardrails built into the GPT at the backend. I can go on and on and talk the entire day about this, but it requires a very thoughtful process when it comes to building products, and working closely with the legal team and the data privacy team is mission-critical. But most important, paramount really, is working with the customer, because our customers are also going through the change management phase. That means educating them, talking about the value proposition of the product, and talking about the audits they're also eligible to perform on the product side as well.
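To make Sam's guardrail list concrete (confidence score, groundedness, citations to the source document, thumbs-up/thumbs-down feedback), here is a minimal sketch of what that response contract might look like. The field names and the 0.8 escalation threshold are assumptions, not Crawford's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Citation:
    source: str            # e.g. "base policy p.14", "endorsement 3"
    excerpt: str

@dataclass
class ModelResponse:
    answer: str
    confidence: float      # 0..1, produced by the guardrail layer
    groundedness: float    # 0..1, how well the sources support the answer
    citations: list[Citation] = field(default_factory=list)

@dataclass
class AdjusterFeedback:
    response_id: str
    thumbs_up: bool        # the thumbs-up / thumbs-down option Sam mentions
    comment: str = ""
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def needs_senior_review(r: ModelResponse, floor: float = 0.8) -> bool:
    # Every answer already has a human in the loop; low confidence, weak
    # grounding, or missing citations escalates further (threshold is made up).
    return r.confidence < floor or r.groundedness < floor or not r.citations
```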
Lance Ondrej (32:03):
Just a couple of things to add. If you've not asked yourself, or your organization hasn't asked, the question of who's already using unsanctioned AI that you as a leadership team aren't aware of, keep in mind that some of these options and tools are already available online at no cost, and that's where the human nature piece kicks in. If a staff member can find a way to make their job more efficient, they're going to test it out, and you may not even know. So I do think, up front, as we're all trying to figure out a governance model that will work, and while it's being tested, because it will be, make sure in the meantime that you're establishing some baseline policy to ensure you don't have staff members on a rogue basis already using some of these tools. It could lead to an inaccurate outcome, but it also gets back to the point about all of this information being available publicly, which may or may not come back to haunt you at some point in time.
(33:18):
Something else, and Sam mentioned it, is where the industry is in truly adopting the technology versus considering the technology. There's an extreme amount of excitement around ChatGPT and its capability that may lead some to think they're already behind and should rush into it. The approach we're taking in our organization is to hold discovery sessions with both our business technology team and our business stakeholders to truly understand the use cases and how they might apply. Hand in hand with that, we also have members of the compliance and legal teams thinking about those use cases and how the governance structure should apply to them. And last but not least, on the potential for litigation: the concerning thing, at least to me, especially in a claims environment with a high transaction volume, is that once these tools are put in place, if there are unintended consequences that come out of them, from a bias standpoint and things like that, we all know the plaintiff's bar will find those opportunities, file suit, and name everyone. And because it's being applied across either a large group of claims, or maybe even all claims passing through the technology, those cases are going to qualify for class action. Even if you have the opportunity to defend it, how long are you going to keep it in litigation, and how many hundreds of millions of dollars are going to be spent trying to defend that case?
Scott Harrison (35:14):
Lance just described, and he may not have done it intentionally, the precise way a company needs to go about understanding and managing its risk across the enterprise. I've had clients who start the conversation by saying, well, that's for the IT department to handle, right? No, it's not. Unlike cybersecurity, which arguably is more of a vertical risk, this is a horizontal risk. And in my experience, some of the most difficult people to convince of this are the CEO and some of the senior leadership. They don't understand the technology; they're used to saying, I don't want to know about the IT stuff, I don't understand it, I don't get it. They're almost as bad as dealing with actuaries. They want to just have them go off and do their thing, and you have to tell them, no, you can't do that, because there is risk here. There's also opportunity here, a strategic opportunity for your company, if you do this right. So I really wanted to reinforce that. And if it sounds like I'm on my soapbox about this, it's because I have a lot of experience with clients doing this.
David Vanalek (36:39):
And can I just make one comment too? One of the other things Lance mentioned was inventorying the AI tools being used across all those cross-functional teams. It's not enough just to ask, okay, who's using it? It's actually scanning for that usage, and that's explicitly called out in the NAIC bulletin: inventorying those tools so that you can then identify what the outcomes of using those tools look like. And then you have to do periodic bias audits of those outcomes in order to lay the groundwork showing, no, it's not creating a disparate impact; it is meeting the spirit of the NAIC bulletin. Back in August of 2020, when the NAIC really started going down this path, they put out their guiding principles, which said the use of AI by the insurance industry had to be fair and ethical; accountable, so folks all the way to the top, including the board, are accountable for its usage; compliant with current law; and transparent, so there's not a black box where no one knows how these decisions are made. And it had to be safe, secure, and robust. If you hit all those points, and these two gentlemen definitely hit all of them, then from general counsel's position, that's a home run.
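A minimal sketch of the periodic bias audit David references might compute a disparate impact ratio over claim outcomes, as below. The four-fifths rule used here is a common fairness heuristic borrowed from employment law, not a metric the NAIC bulletin prescribes, and the sample data is invented.

```python
from collections import defaultdict

def approval_rates(outcomes):
    # outcomes: iterable of (group, approved: bool) pairs for a slice of claims.
    tally = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for group, approved in outcomes:
        tally[group][0] += int(approved)
        tally[group][1] += 1
    return {g: a / t for g, (a, t) in tally.items()}

def disparate_impact_flags(outcomes, threshold=0.8):
    # Flag any group whose approval rate falls below `threshold` times the
    # best group's rate (the "four-fifths rule"; the metric choice is
    # illustrative, not mandated by the bulletin).
    rates = approval_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

sample = [("A", True)] * 90 + [("A", False)] * 10 \
       + [("B", True)] * 60 + [("B", False)] * 40
print(disparate_impact_flags(sample))            # {'B': 0.666...} -> investigate
```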
Sam Krishnamurthy (37:56):
From a practicality perspective, I'll just translate that a little bit. In any ML model, whether it's traditional machine learning with data science or GenAI, model observability and reliability is key: how did the AI make this decision, so that a human can foster trust and accountability? There are various techniques where we provide those visualizations so the human can trust the decision. If the decision is wrong because the model did not encounter that permutation or combination, if you will, it will provide a low confidence score, and the model will prompt the human to give feedback so it can learn and evolve. The other side of this, to your question, is that doing a third-party risk assessment is an excruciating process. Some companies do it in weeks; some take six months. Even for Microsoft Copilot, you have to do a TPRM assessment. Just because a big company releases something doesn't mean you have to use it, because there are a lot of guardrails around knowing how to use it and how to configure it. Just imagine asking a question on Teams where it could pull information from both public and private channels; that could put you in a really bad spot. So even configuring a Copilot involves rigorous security, training, and knowledge.
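Sam's observability point suggests logging every AI-assisted recommendation with enough context to reconstruct it later. A sketch of such a record follows; the fields are a guess at what an internal auditor or market conduct examiner might ask for, not any standard, and the example values are invented.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    claim_id: str
    model_name: str          # feeds the tool inventory David mentions
    model_version: str
    inputs_digest: str       # hash of the inputs, keeping PII out of the log
    recommendation: str
    confidence: float
    reviewed_by_human: bool
    timestamp: str

def log_decision(rec: DecisionRecord, path: str = "ai_decisions.jsonl") -> None:
    # Append-only JSONL log: raw material for the periodic bias audits and
    # for evidence in a market conduct examination.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

log_decision(DecisionRecord(
    claim_id="CLM-1001", model_name="coverage-summarizer", model_version="2024.07",
    inputs_digest="sha256:0000", recommendation="covered with sublimit",
    confidence=0.91, reviewed_by_human=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```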
Scott Harrison (39:23):
So this is awesome. In the few minutes we have left, let me pivot real quickly. David gave a good introduction to what's going on on the regulatory side. He mentioned the AI model bulletin, which the NAIC developed and adopted in December of last year. It's not a law; it's not a regulation. It is a bulletin that states are adopting, I don't know how many states we're up to now, about 12, and states are rolling it out. What they did, at a very high level, is that they didn't make new law. They said, look, we already have existing bodies of insurance law, our unfair claims handling acts, unfair practices acts, and others, including our enterprise-wide risk and solvency assessment requirements. Basically, the bulletin says: this is how those laws are being applied to your use of AI.
(40:30):
They use a risk-based approach, which requires the company to go through the risk assessment process we've all been describing. One of the thorny parts of this, which the NAIC is beginning to work on right now, and whether you're with an insurance carrier or a third-party provider it doesn't matter, so heads up, here's the punchline: a key principle of regulation is transparency. Regulators can get access to any piece of information in an insurance company, or in its entire holding company system, that they want. It might not be easy all the time, but they will eventually get it. So they're confronted now with this dynamic of companies using a technology that regulators don't understand. Some of them have been convinced that these algorithms are inherently and hopelessly biased. And they're also dealing with the reality that the third-party vendors, the people who provide these solutions, are not accustomed to operating in a regulated environment.
(41:47):
So what they're doing right now is working on a framework for how to regulate third-party use and development of AI that's being utilized by insurance companies. There's a whole range of options. I'm happy to talk to anybody, and my colleague Jack Free, who was sitting over there, is happy to talk to anybody to give you a little more information. They've developed a draft work plan and a pretty short list of options. One of the options is requiring third-party vendors to be licensed and regulated directly. We happen to think, at the American InsurTech Council, that's a pretty bad idea. We have our own views on a better way to go about it, but that's what's at stake. If you feel like there's a bullet coming right at your company, there is, and there's an opportunity to get involved, an opportunity to play a part in the discussion at the NAIC.
(42:44):
That's if you're on the third-party vendor side. If you're the carrier, you also have an obligation, because you're going to be held responsible by your regulator for what your third-party vendor does. If your third-party vendor escapes regulation, now the bullet is coming at you, and saying we didn't know, or we took their word for it, or they told us they were testing it, won't help. So real quickly, since we've got a very short period of time, let me just ask the question: how important is it for carriers to know exactly how the models are working, and to do the ongoing due diligence yourselves, even after the model goes live in your company?
David Vanalek (43:34):
I'll take that one, because I think the bulletin offers guidance there: carriers will be subject to market conduct examinations on this very topic. So going back to the comment Lance made about inventorying those tools: creating that framework, having cross-functional teams that periodically review it, measuring and conducting periodic bias audits on your outcomes to ensure you're not having adverse consumer outcomes. With all of those pieces well-documented, visible, and transparent, when a carrier is then subject to a market conduct examination by the regulator, they have something to show: look, this was the principles-based guidance you provided through this bulletin, it's applicable to us, we're doing the best we can to adhere to it, and here's our assessment of the situation.
Scott Harrison (44:27):
And that same body of work is also essential and relevant in a litigation context, right? I mean, it's the same anytime you have to defend your use, it's the same material, correct? Lance, Sam, last word?
Lance Ondrej (44:40):
Just real quick: we're talking about some things that a lot of carriers have really not given a lot of real consideration to. So the question becomes, do you have people in your organization who can take on some of this responsibility, and do they have the skill set? With that in mind, as you're working to create that governance structure, it's really important to address those questions, even if you have to consult with an outside source to be able to size it and better understand it.
Sam Krishnamurthy (45:20):
I would advocate that anybody who's using AI follow the NIST AI Risk Management Framework. It gives you a very good framework for how to manage and mitigate your risks, and this is a constant analysis the product team will need to do, maybe on a quarterly basis. Monthly might be overkill, but quarterly might be a good cadence to carve out. Because for these decisions on claims, even though AI is giving you a recommendation, you still need human empathy, ethics, and contextual understanding to evaluate the AI's outcome and make the right adjudication for the claim. And one last thing to point out: I would always say, even when somebody is handling my claim, I like to talk to a human at the end of the day, because that's the ultimate customer experience a human can provide. All the AI stuff is great, but the human is still a critical element in the entire process.
Scott Harrison (46:27):
Guys, thank you very much. I think we have to wrap it up here. I'm going to be around the rest of today; Sam and Lance, I think you guys can be around; David, I know you've got a plane. If you have questions, and I apologize we didn't leave time for questions, feel free to stop me or Lance or Sam. We'd be happy to talk to you. Thank you very much for coming. Appreciate it.
Driving Efficiency and Customer Satisfaction: The Role of Technology in Modern Claims Management