Mitigating Regulatory Risk Through Responsible AI Frameworks

Innovation Platform, Room Two (Mizner Center, Grand A-F)
Insurers are deploying and will continue to deploy artificial intelligence systems in many aspects of delivering insurance products. From preventing fraud to optimizing claims, underwriting, and compliance, AI systems are likely to become increasingly indispensable, but also increasingly scrutinized. Insurance regulators in the United States are already starting to issue concrete guidance, or even force-of-law regulations, about expectations surrounding AI data usage and, more importantly, governance. These developments compel anyone working with AI systems to think carefully about whether a carrier has an effective framework in place, one that positions it not just to deploy AI systems effectively in its insurance products, but also to maintain those systems internally and to communicate with stakeholders in a meaningful way.
  • Company-wide (i.e., not siloed) AI governance frameworks will be the key to maintaining nimble, compliant AI systems
  • Insurers must be prepared to answer questions from regulators surrounding how they deploy AI
  • One cannot simply explain results without process; mitigating risk is key

Transcription:

Evan Daniels (00:08):
All right. Well, good morning. Thank you for coming to this mid-morning session. Appreciate you being here. My name is Evan Daniels. I'm an attorney. I work at the law firm of Mitchell Sandler. We're a financial services focused boutique law firm that works with financial institutions, fintechs, insurance companies, and insurtechs on primarily regulatory matters. Yeah, come on in. I'll cheer for anyone who comes in, but really glad you're here. I've been joking with people who have asked me, oh, what are you speaking about? Well, I'm going to be the wet blanket on AI. Not really, but we are going to talk a little bit about that. If you were in Terry's keynote a few moments ago, he showed that pinball machine with finance and legal at the end. It's like, hello, we're that legal team that does this. But I don't think so. I actually loved Terry's presentation.

(01:10):
I thought it was great. I think what we're going to talk about actually complements what Terry had to say, although we're of course focused on a pretty discrete area within all that. But ultimately the goal is to enable things like AI to make your business better. So that's what I try to help companies think through as they approach their unique aspects of this. And to my left are two general counsels who are actively doing that in their companies. So I've introduced myself. I'm going to stop talking for a moment and let each of them talk about themselves for a little bit, and then we'll jump into our discussion about AI regulatory risk and how to deal with it in the current environment. Rachel,

Rachel Jrade-Rice (02:00):
Good morning everybody. Do we have any lawyers in the room? One? We have one.

Evan Daniels (02:06):
All right.

Rachel Jrade-Rice (02:07):
Okay, so the rest of you are super brave. No, we appreciate you coming to the panel this morning. My name is Rachel Jrade-Rice. As Evan said, I am the General Counsel of Next Insurance. All three of us on this panel are also former regulators. We all worked at various departments of insurance. I worked at the Tennessee department as assistant commissioner before coming to Next, and I joined Next in December of 2020. So I'm rounding the corner to four years at the company.

Scott Fischer (02:38):
Great. Can you all actually hear what we're saying? Okay, cool. Me? Okay, I will speak up more. No problem. So my name's Scott Fischer, and I'm at Lemonade, Lemonade Insurance Company and Metromile Insurance Company. I've been there for two years now. And as Rachel mentioned, I am a former regulator. I was in charge of insurance at New York DFS, then went back to private practice for a bit and came to Lemonade. And what we do at Lemonade is exactly what Rachel does and what Evan helps all of us do, which is really try to marry AI and the insurance space in a way that it hasn't been before, but will be in the future.

Evan Daniels (03:23):
Great. Well, we want to talk about how Next and Lemonade are using AI in their companies and what governance looks like for them, as a kind of case study.

(03:35):
But I thought I'd first start by setting out what the current regulatory context is and why it matters. Here's why I think this conversation that we're having today matters for all of you out in the industry right now, and that's because AI adoption, all the things that Terry was talking about a few moments ago, is going to continue. It already is, but it's going to continue to outpace the regulation that follows. Regulation is always going to trail what the industry does. We all kind of know that, but that creates some distinct problems for all of you out there, which is that oftentimes you're implementing AI tools into your products and services without really knowing whether or how this is going to be regulated, or what the rules surrounding it are going to be. You're doing it ahead of time. And so the risk is that you are going to have to backtrack a little bit or modify, and that's of course expensive, or in a worst case scenario, puts you at risk of things like enforcement actions.

(04:43):
That's not going to change anytime soon. Just to give you a brief history of the regulation in this space: when I talk about regulating AI, I'm not really talking about regulating AI generally. I'm talking about the insurance regulations that are specifically applicable to how AI is deployed in insurance. That's maybe a lawyerly distinction, but kind of an important one, because obviously when I'm talking about regulating AI in this context, that wouldn't necessarily apply to a different kind of business. But in 2020, I believe it was, the National Association of Insurance Commissioners adopted their AI principles. That was the very high level first crack at the regulators saying, here's what we expect out of you insurance companies that we regulate as it relates to how you use AI. That was 2020, four years ago at this point. Then just last year, so three years later, it took three years to go from principles to a more detailed model bulletin that actually outlined some more specific things about what the regulators would like to see and expect to see.

(05:58):
That's not a law, but 12 states in the six months since have adopted it. 12 out of 56 jurisdictions in the United States, that's the states and the territories, only 12 have adopted it thus far, and it's taken them six months to do so. The point here is that that is relatively fast for regulators. I can't say that I've really heard any comments in any of the sessions I've sat in where the outlook was beyond 18 months. That makes sense. You all are moving at a much faster pace, but the regulation is going to be trailing. So you have to keep in mind constantly, where is this headed? Keep your eye on it, because it might end up forcing a shift, whether slight or big, you don't really know. So, to give you the punchline about why this is important: in three to five years, when the actual rules are promulgated, you want to make sure you're positioned in such a way that you don't become the enforcement test case. That's what I think this topic of governance is ultimately designed to get companies thinking about: how to avoid being that enforcement test case. So with that, I want to turn to probably a much more interesting discussion, which is how Next and Lemonade have approached this topic. And I'll let Rachel and Scott talk about that. But can you tell us some more about how AI has affected the way you deploy insurance and service your customers in your respective companies?

Rachel Jrade-Rice (07:43):
So I think often when we talk about AI, we use it as this broad term, when AI actually encompasses a number of different types of artificial intelligence, and sometimes it's even used to describe machine learning. I have a little bit of a different experience than Scott, where I came into Next relatively early in the whole process of the adoption of these technologies at the company and was quickly able to understand that we needed to develop a governance framework around this. Colorado's law, which is much more prescriptive than the NAIC bulletin, was passed in 2021; I believe it's SB 169. And that was kind of the industry's first signal that, okay, regulators are getting really serious about looking at this. And so you start thinking through the process of how do we handle, first of all, regulatory fears around these technologies, and also implementation within the company to enable the company to continue to move really fast, because speed is everything

(08:55):
in tech companies. Like Evan said, your outlook is a year to 18 months most of the time, and not much further than that. We do large-scale three to five year financial planning, but in terms of product launches, we're not planned that far ahead. So when I joined Next, we were using what I would call narrow AI, which is a closed algorithm that's essentially trained on a data set; it doesn't think for itself. And it was really interesting to hear the regulators starting to get worked up about artificial intelligence, because if you understand how closed AI works, there's not really too much to be worried about, because it is trained very well. But nevertheless, you have to inform the regulators of what's going on. And so we started out using the governance processes that already existed within the company, right? Things have to get approved by legal before you launch a new product, and you catch issues in the making. So it was relatively easy for me, coming on board, to explain to the company what the future outlook was going to look like and how we needed to start early to do these checks.

Scott Fischer (10:24):
Yeah, totally different experience. So when I led insurance regulation for DFS, one of the first things that I got was Lemonade. They wanted a license, and we went back and forth a lot, many, many, many times. And eventually, I don't know if it was a good idea, but I gave 'em a license, and so many years later I ended up at the company. Lemonade from the beginning has always been about AI. AI has always been baked into its DNA; it's maybe its defining characteristic, or a defining characteristic. So it's always been there. It is closed, as Rachel said, but it's always been there. It's been there for a couple of reasons, mainly for ease of use, so really to be able to interact better with the customer than a traditional insurance company can. Not that traditional insurance companies don't interact with customers just fine, but we can do things a lot more quickly and frankly a lot more cheaply.

(11:23):
So our first product was renters insurance, and because we could use AI and leverage technology, we could sell it with minimum premiums far lower, far lower, than the other competitors. That's partly because of AI. So from the beginning it's been baked into what we do. Alongside that, because of the regulatory environment, the ways in which you deploy it are going to be a little bit more narrow than in an unregulated area, but it's always been something that we've been doing from the get go. And we can talk a little bit later about the governance process. Given that Lemonade started in 2016, and, as Evan said, it didn't really hit the regulators' radar in a meaningful way until another four years later or so, there was a lot of stuff going on between that time and today. So now we're going back and looking at how we do a better job at our own internal governance. Not that there was anything wrong with the previous one, but it didn't anticipate the scrutiny that we are seeing and will see in the future.

Evan Daniels (12:41):
And I think if I were going to summarize governance, which is a topic that we're going to cover in more detail, the way I like to think about it is: governance is ultimately the story that you tell about how you deploy AI. That's really what it is. And that's something that should, in my opinion, permeate the whole company. I'm piggybacking a lot off of Terry, because I think his presentation did a nice job of summarizing it company-wide. But my advice would be, with that pyramid of people all coming together, governance should be a part of everybody's understanding about how AI is used. It doesn't have to be technical. You don't have to be an engineer to understand how to talk about the way you deploy AI. And certainly the regulators aren't going to understand all the ins and outs of the technical aspects.

(13:34):
So if I had to summarize governance, that's the way, and I'm probably getting a little bit ahead of myself. Before we get into governance and how it works for Next and Lemonade, I thought it might be interesting to do something of a backwards-looking good, bad, and ugly of how AI has been deployed in your companies. From an outsider's perspective, I think the good of AI, the promise of AI in many ways, is that it creates greater efficiencies. Scott, you talked about lower premiums because of the way you're able to use data with these AI tools; that leads to better customer experience, all the things that we hear about regularly. I think another good thing up to this point is that the regulatory discussion has stayed relatively high level, which means it hasn't gotten too prescriptive, and that has allowed companies to start using these tools without too much interference. Although maybe that's starting to shift a little bit. From a, I might say this is bad, but... Oh, yes, sir. Oh, there it is. Yeah, thanks.

(15:02):
So I think your question is: have the regulators had to step in, without naming names? In other words, has there been an enforcement action as it relates to AI? I am not aware of any public ones, at least any that have gotten into the details. And that is what I mean by the regulators' interest has remained relatively high level. They've stuck to principles-based regulation. Rachel mentioned the Colorado law; they just promulgated some rules for life insurers that relate to governance, and those went into effect, I think, this month actually. So those things could be coming. I would expect that they are. At some point you will see a public

(15:53):
issue with the way a particular company deploys AI, but that's still, I think, out in the future. And that's one of the difficulties: we don't know, and this is what I was going to say about the bad of AI implementation so far, we don't necessarily know what that's going to look like. We also don't know that we can always stop the things that we all agree are bad from happening with the way some of these products are deployed. Maybe one of you will disagree with me on that, I think.

Scott Fischer (16:27):
I don't disagree at all. I don't think there has been an enforcement action, like in the market conduct sense where we're going to fine it, that I'm aware of, and I actually pay attention to this stuff. So I don't think there has. What there has been, to extrapolate a little bit more on your question, is, I think the technical term is, we freak out regulators when it comes to using it in pricing and underwriting.

(16:54):
So while there's not an enforcement action that says, well, we are going to go after you because you did it this way or that way, you can't use it in pricing, you can't use it in underwriting, unless you file that model. And a lot of the states, I wouldn't say they prohibited it, but they effectively prohibited it, because the time, the effort, the energy that goes into looking at them, we want this information, we want that information, we don't want to give you the blah, blah, blah, blah, blah, has really put a huge damper on the ability to use AI in the areas of pricing and underwriting. There are certain exceptions. I think Texas has been very open to understanding what the models look like and allowing for their use. But other states, and it will come as no shock to anybody here, states like California and New York, are much, much, much less open to hearing about or looking at those models. So the direct answer to your question is no, but the more useful answer, I think, is because of the reluctance or the fear of allowing these things into traditional areas.

Rachel Jrade-Rice (18:15):
I think also we have to keep in mind that these regulators don't have a lot of data science resources in house. Those positions are, first of all, very expensive to hire, and the departments have not yet understood that they need to attract this talent to be able to understand and keep up with what insurers are doing. Scott and I actually visited the Georgia department in early May, and during the course of the conversation with Commissioner King, they asked for assistance in helping to draft a job req for a data scientist, because they have an understanding that they need someone who can speak the same language as these companies. And that's not happening right now. And some of that, candidly, is good, right? Because it allows us to continue doing what we're doing, and we can sell a good governance story and everybody feels okay. But part of it is bad, because then it does create a freak out, and there's a fundamental misunderstanding about how these technologies are being used and what the real danger of them is.

(19:25):
And I agree with Scott that the issue of underwriting has come to the forefront, and bias related to underwriting, which insurance companies are already subject to unfair trade practices laws on regardless of what you use for underwriting, whether it's a human or it's AI. But nevertheless, we've had a lot of tension over this right now, and most people have not filed LLMs. Actually, my team has come to me and said, can we do an LLM for underwriting? And I've said to them, sure, but we have to figure out how we're going to file it, and how we're going to make sure that it's constrained to fit within the regulatory schema that's already there, in terms of bucketing for percentages that you can use for variables and things like that. But also, your major challenge is getting your regulators comfortable with it, because they don't have this experience.

Evan Daniels (20:25):
Yeah, and I think maybe what I would describe as the ugly part of all of this thus far is the requirements related to data, not just AI-specific, but data privacy and cybersecurity requirements. You all have a lot to keep up with, and that can get difficult when you're moving in different areas and you have all these different jurisdictions that you have to deal with.

Scott Fischer (20:53):
I think, just to follow on the good: for us, AI is included as part of what we do, and AI has allowed us to move more quickly to insure people that would otherwise not be looking at insurance. We attract a much younger cohort. That's all been good. Hopefully the good part will also be that AI can allow us to be much more granular in differentiating risk. All men, we all agree, are lousier drivers than all women.

(21:31):
Okay, fine. But we can do better. AI can allow us to say, well, not this man and not that woman. The challenge is getting regulators comfortable with the fact that we can do that at all. The ugly, and I think Rachel has had the same experience I have, is that data scientists are brilliant, but they're not so good at showing their homework. And so part of what I've done coming in is to get them to show their homework and understand how to really express what they're doing in a way that is more understandable to regulators. And I think the other ugly part for us, and maybe there's one lawyer here but everyone else is in some other area of insurance, is that everybody that's in claims and everybody that's in underwriting and pricing wants to know what the rule is.

(22:29):
There ain't no rules. And so the really ugly part, I think, for AI right now is that there aren't real rules. There are rules, but they seem to be changing, and they don't seem to be changing in the way that they're written. They just seem to be changing in the way that they're interpreted. And so a lot of the ugly of AI, I think, is matching what the rules are with where you want to go, with where we think regulation and society are going to be in the future. I think that's probably the ugliest part, as I'm sure Rachel would agree: trying to explain to claims folks or pricing folks, yeah, I know that there's nothing that prohibits us from doing that, but we need to be thinking about how it's going to play, what it is going to look like. Because the rules as they stand today are really changing, not just moment to moment, but state by state. So I think that's probably what I've found to be the ugly part of it.

Rachel Jrade-Rice (23:30):
I think Scott's talking about internal stakeholder management, and I think that's a really difficult area when you're a general counsel in a company. And part of it is that these technologies are built for the ease of use of the customer. So Next has essentially instant bind for its commercial products, which means you go through a funnel, our funnel takes about 10 minutes max, depending on how long you have to think about the answers to questions, and we can bind instantly. And the reason that we can do that is because we do use artificial intelligence and technology on the backend to validate information about the customer, to make sure that they're saying everything that they need to with the specificity they need. So essentially, if a customer comes through the funnel, they will give us information about themselves, and then we will actually pull from their web presence and other areas, from LexisNexis, and check information about them instantaneously, which ensures that they get the right coverage and the right price for their operations.

(24:43):
And this is super important, because you don't want someone coming in on the backend, on the claims side, and having an issue because they didn't have the right coverage. So when you're talking to your internal stakeholders and you're telling them, sorry, you have to constrain this, or you can't use this the way you think you want to, their answer is: why? Because it actually benefits the consumer. So we have a little bit of this push-pull right now. I would say the bad of AI, in my opinion, and Scott and I both do government relations for our companies as well, and I try to talk to regulators about this a lot, but the worst part about AI is humans. We train the models. And so one of the things that I talk about internally with my company, and this may not be a very popular thing to say, is that diversity in your data science teams is important, because you have humans who have diversity of thought and can catch issues when you're training your models. And I think that's one of the first-line preventions that you can do to ensure that your models behave the way that you intend them to. And the ugly, I agree with Scott, is obviously trying to have a crystal ball. Nobody knows where regulation's going right now, and attempting to thread the needle for your company is incredibly difficult.

Evan Daniels (26:11):
So with all that in mind, let's shift to talking more about what all that means for the story that you tell and try to get your internal stakeholders on board with, as it relates specifically to governance. And I think there are a couple questions that come to my mind. Should governance be uniform? Generally, I would say probably not; every company's different. Both of your companies have very different customers, so governance probably shouldn't look exactly the same. But it seems like there are maybe some principles that are likely to overlap, irrespective of what your customers are doing and what they need insurance to do. So to the extent you're able to share, what are some of the ways you think governance is permeating through your organizations, and where do you see that going? Go ahead, go for it.

Scott Fischer (27:14):
So I agree, it's definitely not one size fits all.

(27:21):
I think it's one size fits many. Health is very different than P&C, which is very different than life. But within life you can have a lot of similarities, and within P&C you can have similarities. P&C then, I guess, subdivides into personal lines versus commercial. So there are differences, but actually personal lines is going to be, or at least should be, way more concerned about what is or is not unfair discrimination. There's also unfair bias in the insurance law, by the way, alongside unfair discrimination. Whereas for commercial, it's not quite the same thing. I mean, it's there, but not quite the same thing. So I think that's true. But to answer your question, Evan, I think the thing that is absolutely key, and the thing that the NAIC got right, was the need, and Rachel alluded to this, to have a broad spectrum of people involved.

(28:25):
So when we developed it, it's clearly the business folks, it's the CEO, it's the people that run product, it's also the lawyer. We at Lemonade hired a part-time ethicist who works full-time at Google; she's an ethics advisor, so she's involved. It's the data science folks, it's the internal audit people. So I think that would be a theme or a commonality regardless of your industry or sector: have the right team there, and a team that includes people from different disciplines. And if you can, certainly for a company that does personal lines, as Rachel said, have diversity of experience; that's even better. To go off on a little bit of a tangent: once upon a time, my dad worked in the New York Department of Corrections for years and years and years, and he would have people come in to sell security products.

(29:30):
One of the guys came in and he had this picture with a snarling dog on it. My dad did not like the snarling dog, and his comment was, you don't really want to use that. And the guy's like, what are you talking about? He said, well, that's going to be very offensive to some people. Why? Well, snarling dogs are associated with the civil rights movement, and this guy had no idea what my dad was talking about. He never intended that whatsoever. But when you have people who have different life experiences, they can see things that other people can't. And maybe you ignore it, maybe you don't pay attention to it as much, but having the ability to understand what it looks like and to be sensitive to it is really important. So I think having the folks around the table that are sensitized to it, that understand, is key. That's number one.

(30:28):
Number two, and then I'll shut up, unusually, but I'll shut up, is following at least a standardized methodology. And one of the things that the NAIC alluded to, though I think they should have gone a little bit farther, was the NIST standard that's out there. Whether you like it or not, it's going to be sort of the gold standard, and I think it works perfectly fine. And it's the measuring and the monitoring and the risk assessment, doing all those things: what are we trying to get at, what's the risk assessment. At Lemonade, we started with what we call our North Star. What are we trying to achieve with this? And once we do that, everything else kind of falls into place: how are we getting towards our North Star in our governance? Those sorts of methodologies, I think, will hold the company in pretty good stead.

(31:18):
I think that's always the key. I don't know, Rachel, anything to add?

Evan Daniels (31:22):
And one of the things that I also like to compare governance to is a cybersecurity response plan; everyone understands the importance of having one when you have an incident. Now of course, cyber is fast-paced. Usually an incident happens and you've got to respond immediately. AI governance is not exactly that, but it's the same idea. You want to have that plan in place, that story to tell, so that if there is an inquiry, if questions are coming in, you've identified in advance: here's how we explain this, here's how it's put together, everybody's on the same page about that, and we move forward from there. In the absence of rules, that may be the best you can do in the interim. And of course that will, I think, remain part of what you do, even after more prescriptive, hopefully not too prescriptive, rules are implemented on this.

Rachel Jrade-Rice (32:24):
So I think North Star is always where you start. A lot of people think that your general counsel is here to shove rules down your throat and not pay attention to the needs of the business. But what we should be doing is being right in line with the company's mission. So the first thing I did when we were implementing what I would call a tangential, specialty governance mechanism for our artificial intelligence, in large part because of the NAIC bulletin, is that I went to my chief product officer and my chief technology officer and I explained to them, here's what we're subject to and here's what we're going to need to do. Talk to me about what you think, given these restrictions, our guiding principles should be. And we always start from there. And whenever we roll out a new iteration of AI, or implement it in a new area of the lifecycle of the insurance product, we go back to those guiding principles. And a little subset of this is an AI use policy, which is super important for every company, especially if you do use an enterprise OpenAI product. When you have OpenAI, I would highly recommend that your employees not be allowed to put PII or any other type of sensitive information in it. And you might think that that's intuitive to people, but it's often not. And so we have a use policy with very clear acceptable uses, some don'ts, and then a process for evaluating whether a proposed use is acceptable.
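
To make the "don't put PII in the prompt" guardrail concrete, here is a minimal sketch of the kind of pre-submission check a use policy might be paired with. Everything here is an illustrative assumption, the regexes, the function names, and the stand-in `send_to_llm` call; it is not Next's actual tooling, and real PII screening would rely on a vetted detection tool rather than a few patterns.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library or vendor tool, not a handful of regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def send_to_llm(prompt: str) -> str:
    # Stand-in for the approved enterprise API call.
    return f"[model response to a {len(prompt)}-character prompt]"

def check_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt, if any."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    """Enforce the use policy: block the call when PII is detected."""
    hits = check_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked under AI use policy; detected: {hits}")
    return send_to_llm(prompt)

# submit_prompt("Summarize the claim filed under SSN 123-45-6789")  # raises ValueError
```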

(34:04):
The next piece of it for us is what Scott alluded to, which is a cross-functional committee. We have an AI governance committee that meets quarterly, or, if there's a product that's rolling out and they need an answer sooner, we will meet sooner than that. And what we do is evaluate all the projects that are coming through. We all issue-spot. So we have our VP of risk and reinsurance on the committee, we have obviously our chief technology officer, our chief product officer, and we have our head of data science. We're actually very lucky at Next, because one of my heads of machine learning is a former actuary, so he kind of gets it a little bit on the insurance piece. And we sit down and we evaluate each product that's about to be rolled out. And then what we do, which Scott also talked about, is we require documentation for everything.

(35:07):
And this is where my actuarial team comes in and works with the data science team to make sure that everything that they would document as actuaries, we are also documenting. We don't use AI in the claims space right now; we're very careful about that. We have some products and data but haven't rolled it out. So mostly our use cases are underwriting, a 24-hour chatbot, things like that. So having detailed documentation on the front end, so that you can get proper sign-off, is fantastic. And then you do have to go into auditing your algorithms or your AI to make sure that they're doing what they're intended to do.

(35:55):
One of the interesting things that I'm sure you guys know about, but it was kind of a surprise to me, although it's intuitive, is that you can actually build an AI to audit your other AI. And so we have layers of systems: we have essentially a more narrow, closed AI that then audits our other AI and will actually spit out the documentation for us. So when you're using it in underwriting, we spot check by having this log of how our AI is actually operating, knowing, if there's a decline, for example, why the decline happened. And it's very cool: it's essentially a second AI that layers on top of the first. And then you need a process for remediation in the event that something's going awry. And I think this is the most challenging aspect of governance for AI models: trying to figure out, well, how do we fix this thing without scrapping it entirely? Especially when you are starting to go into gen AI and large language models, trying to figure out how you have it unlearn certain things is a little bit of a challenge.
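
The "second AI that audits the first" setup depends on a decision log that something, human or model, can actually read. Below is a minimal sketch of that logging layer under assumed, hypothetical names (`UnderwritingDecision`, the field set, the file path); it is not Next's system, just an illustration of recording every automated decision with its inputs and reason codes so a spot check of declines is a query rather than an archaeology project.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UnderwritingDecision:
    """One auditable record per automated decision."""
    application_id: str
    model_id: str       # which model (and version) made the call
    decision: str       # e.g. "accept" or "decline"
    reason_codes: list  # the "why," in reviewable terms
    inputs_used: dict   # the features the model actually saw
    timestamp: str

def log_decision(record: UnderwritingDecision, path: str = "decisions.jsonl") -> None:
    """Append the decision to an append-only JSON Lines audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

def declines(path: str = "decisions.jsonl"):
    """Yield every decline so an auditor (human or model) can check its reasons."""
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            if entry["decision"] == "decline":
                yield entry

# Example record with made-up values:
log_decision(UnderwritingDecision(
    application_id="app-123",
    model_id="uw-model-v4",
    decision="decline",
    reason_codes=["prior_claims_threshold"],
    inputs_used={"years_in_business": 1, "prior_claims": 3},
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```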

Evan Daniels (37:14):
Yeah, I think those are great examples of the different pillars that you have to have as part of a viable governance plan. A couple things you said, too, just to riff on them: when it comes to regulators, at least, if it's not documented, it didn't happen, which speaks to the importance of documentation.

Rachel Jrade-Rice (37:32):
Sometimes it's a good thing

Evan Daniels (37:33):
Sometimes that's okay. That's right. But if you're trying to explain yourself, it can be a negative thing, something that you have to overcome. I think we're going to have some time for questions, so we're happy to take questions here in a moment. But I wanted to ask you both: we've talked a lot about crystal balls and how tough that is, but let's gaze into our crystal ball a little bit about where all of this is headed. Like I said, we've kind of been a wet blanket in terms of talking about risk and mitigating it, but there are some really good things happening too when it comes to how AI is being deployed, not just in your companies but generally. I think in particular of fraud prevention, although that cuts both ways, as we've seen in other sessions here: AI can be used to help commit fraud, but it's also helping companies detect it a lot more easily. So, not necessarily just focused on regulation, where do you see this headed, and maybe where would you like to see it head?

Rachel Jrade-Rice (38:40):
So I think you're going to start seeing more generative AI being implemented within companies. My chief product officer came to me with a brilliant idea the other day, which I obviously can't talk about, but he was expecting me to crap all over it and I said, no, this is super smart. It has to do with risk mitigation. So I think we're going to start seeing gen AI used all across the insurance spectrum, in claims handling, in the chatbots we have right now. For governance purposes, since regulators are super uncomfortable, there should always also be a layer of human check on these things. But I do think you're going to see it in potential value-added services to the customer and in other areas. So we're going to see more large language models involved in our insurance.

Scott Fischer (39:38):
And we're certainly using gen AI today. It works extremely well for claims, for first notice of loss, those sorts of things. It works extremely well for customer experience: I want to change this on my policy, that on my policy, where do I go? It works really, really well. What I'm hopeful we'll see, not just because it works well for Lemonade but for everyone else, is the ability to use AI to better differentiate risk, to better price risk. Because then, I mean, it will allow insurance to be generally cheaper for most people. Some people may be more expensive because they're a greater risk, but that's good for us. I think that's where you will see it. The difficulty, the future, is in the rub between the existing rules, the existing laws, the existing expectations, and where we want to go, and getting regulators over that hump.

(40:44):
Getting the general public over that hump, I think, is probably the biggest challenge. But I think we will see it. I think people will recognize that there's a benefit to it. One sort of example is on the fraud detection side. I don't need to make this interactive, I find that annoying when people do it, but if you asked the question, well, is it a clear red flag for fraud if the person files a claim five days after buying the policy, I think everyone would say, yeah, sure, that's clearly a red flag for fraud. I think the answer, what AI can show us, is no, not really actually. And actually that's probably more unfair. You can use AI to figure out, well, who files claims more frequently after buying a policy? Is it people in this community or that community?

(41:40):
You can discover that; that's not something you're going to discover if you just have a checklist of fraud flags. Even if you drew out that information and had human beings looking at it, you could probably get to the point at which you decide, well, no, actually. Take renters: we sell a lot of renters insurance, a lot of kids. Oh, we're going to have a party in my apartment or my new house, and someone says, you should get insurance. So they buy insurance, and then they have the party, and then they have a claim. It was a bad risk across the board, but it wasn't fraud. And AI is a tool that's going to be able to differentiate that from the person buying a policy and making a claim five days later because they already had an incident. They're outright, outright fraud.

(42:35):
So that exists already today. What we'll see, I think, is a greater acceptance of it as regulators get more confident with it. On the regulatory and legal side, I hope we'll move more towards two things. One is a recognition by the regulators that the tools, the rules that we already have in place, even if they're not fully adequate, are the right methodology. When I say the right methodology: today, you can't unfairly discriminate, you can't use protected classifications, and frankly, you can't use proxies for protected classes, and so whether AI is used or not doesn't really make a difference. I'm hopeful that they will come around to that. And what we'll end up seeing, I think, is what you've probably seen if you follow this space. Even though the NAIC did its little model bulletin, sorry, I don't mean to belittle it,

(43:50):
DFS always has to do something different, and my former colleagues came out with something about how they're going to look at it. And while I think there's a lot not to like in their proposed circular letter, there are two things to like. One is they limited it, so it's pricing and underwriting. It's not everything under the sun; it's not claims, it's not fraud detection, it's not marketing. They've limited it to something that actually would have an effect on human beings. And then they went to the next step, and New Jersey has actually followed along as well, which is what Colorado didn't do: they said, we're going to apply a disparate impact analysis. And for all the non-lawyers here, that means even if you don't intend to discriminate, you can discriminate if you're impacting one group of people more than another. But not every time that you do that is it problematic, and then there's a burden shifting. So you did it.

(44:52):
Now the burden shifts back to the company to say, well, why did you do it? Did you do it in an appropriate way? Did you do it in a least discriminatory way? And so you then get the ability to discriminate, which, as we all know, is what insurance is all about. You get to discriminate, but you get to discriminate a certain way and with certain limits. That's where my hope is. My expectation is eventually we'll sort of get there. It'll be a long road before we get there, and a lot of gnashing of teeth before we get there, but I think that's probably where we will get. And on the really good side, I completely agree with Rachel: it's going to make not just the experience better, it's going to make it cheaper. It absolutely is going to make it cheaper.

(45:37):
And I know that's kind of scary. I mean, the most expensive part of insurance is the people, so I know that's kind of scary. But I think there's still a lot of opportunity for people who are there today to continue to work in the area. It's going to make it a lot cheaper, though, because you can do things that used to take human beings. It's like anything: when the first computers came out, yeah, there were people who were worried. But I think you will see a lot of that; we'll make it cheaper. And frankly, our goal is to actually use it to make it more fair, but that's harder to assess than cheaper. Cheaper is very easy to assess. I think that's probably where we get to, and I'll stop there.
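
For the non-lawyers, the disparate impact test Scott describes is usually quantified with something like an adverse impact ratio, comparing favorable-outcome rates across groups. The sketch below uses the "four-fifths" threshold borrowed from employment law as a rule of thumb; whether insurance regulators would adopt that exact cutoff is an open question, and the numbers are hypothetical, so treat this as an illustration of the mechanics rather than a compliance standard.

```python
def adverse_impact_ratios(outcomes: dict) -> dict:
    """Compare each group's favorable-outcome rate to the most favored group's.

    outcomes maps group -> (favorable_count, total_count).
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items() if total > 0}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical approval counts out of total applications, per group.
ratios = adverse_impact_ratios({
    "group_a": (80, 100),  # 80% approved
    "group_b": (58, 100),  # 58% approved -> ratio 0.58 / 0.80 = 0.725
})
for group, ratio in ratios.items():
    flag = "needs justification" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

A flagged ratio is roughly where the burden shifting Scott describes would kick in: the company would have to show that the driver of the disparity is legitimate and that no less discriminatory alternative was available.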

Evan Daniels (46:18):
Well, I'll add two things in closing. First, I think we've seen some discussion around distinguishing AI as a tool versus a thing unto itself.

(46:32):
And I'm guessing that as this discussion develops, as understanding of what AI is and how it's used develops, we're going to have a recognition at some point, I don't know if it'll be an aha moment, but there will be some blending, some understanding that how regulation touches AI is probably going to be related to things that we already have. The big thing in that is data privacy. Rachel mentioned not putting PII into how you're deploying AI. That may be even more important, in large part just in terms of practical effect, than specific AI rules themselves, because it's a lot easier to plug in someone's PII and make it public. That is going to be a much more likely scenario than creating an algorithm that has biased underwriting or something like that.

(47:31):
So watch for that integration. Data privacy is a huge issue. If you're confused by all the different rules about AI in the states, I think data privacy is probably even worse right now when it comes to distinguishing who has what standard depending on the jurisdiction. And the second point I'd make is: we've been talking in the context of carriers, but this discussion about governance applies to those of you who are what I would call insurance-adjacent as well. If you're a third party service provider that's working with insurers, this governance discussion absolutely applies to you. The regulators are going to want to know how you do all these same things that Rachel and Scott have been talking about in their insurance companies. So it's going to be important for your companies to also have that same story to tell.

(48:27):
I suspect if you work with Scott and Rachel, they will eventually say, if you want to work with us, you need to be able to explain to me how all this works, in case I have to go explain it to my regulator. And very likely there will be rules around that at some point, much as there are in the banking space. I think it's a logical extension of some of those areas: if an insurance carrier is using AI via a third party, the regulators are going to want to be able to understand that. So I think those are the two things I would close with, beyond just, again, thanking you for coming. We've got a few minutes left, so if there are any questions, we're happy to take them. And thanks again for coming. Appreciate it very much. Any questions? Yes, sir.

Audience Member 1 (49:20):
Since the regulation is coming and the speed of innovation is miles ahead, are there any kind of baseline steps to get your governance prepped? I'm thinking inventorying, risk rating, having your own methodology for risk rating. Is there stuff that people can start taking to their organizations to say, at a baseline, you can get this going, so that when the regulation drops, you're already off the blocks? From your experience, anything that you could offer in that space?

Evan Daniels (49:53):
Yeah. Well, I think Rachel actually gave probably the best description of the kinds of things companies should be doing, so I should let you talk about that. Yeah.

Rachel Jrade-Rice (50:04):
My number one thing that I would say to do immediately is map out where you're using your AI, because you would be surprised how many companies don't have an aggregated list of where it's deployed in every piece of the operations. So do that, and then you go back and you check the documentation around the deployment of each one of these and make sure that all of them are very well documented. And I think that's a good starting place. That doesn't entail going back and assessing whether things are operating the way that they should, but at least you'll have the basis of a story that you can spin in the event that there's any type of regulatory inquiry.
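
Rachel's "map out where you're using your AI" translates naturally into a simple inventory, one row per deployment, that can be queried when a regulator asks. A minimal sketch follows; the field names and example entries are illustrative assumptions, not a regulatory schema or either company's actual inventory.

```python
from dataclasses import dataclass, field

@dataclass
class AIDeployment:
    """One row in a company-wide AI inventory."""
    name: str
    business_area: str       # e.g. "underwriting", "claims", "marketing"
    model_type: str          # e.g. "closed algorithm", "LLM chatbot"
    owner: str               # the accountable person or team
    documentation_link: str  # where sign-offs and model docs live
    third_party_dependencies: list = field(default_factory=list)

inventory = [
    AIDeployment("instant-bind risk checks", "underwriting", "closed algorithm",
                 "data-science", "wiki/uw-checks",
                 third_party_dependencies=["LexisNexis"]),
    AIDeployment("customer support chatbot", "customer service", "LLM chatbot",
                 "product", "wiki/support-bot"),
]

# "Where do you use AI in underwriting?" becomes a query instead of a scramble:
underwriting_uses = [d.name for d in inventory if d.business_area == "underwriting"]
print(underwriting_uses)
```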

Scott Fischer (50:54):
Making lists is incredibly tiresome and tedious, but super important. We know from our former lives, nothing destroys credibility more than having to say, oh, we don't know what we did. And like I said, and I was kidding, but it's true: when you're a data scientist, or when you're a tech-forward company, that stuff's not fun. I mean, it's not even just them; nobody likes doing homework, and nobody likes showing their homework. But getting an inventory, we've imported from the pure tech world the concept of model cards. So all of our models have cards, and even if they're not perfect, they show something, and they show iterations. I can't tell you. As Rachel pointed out, I came to Lemonade when this stuff had already happened, and then we get questions and it's like, well, no, no, but that was, oh no, we're not doing that anymore.

(51:57):
That's an old version. How do you know? Oh, because somebody did that. Then it's like, well, where's that guy? Well, he's gone. I mean, you're killing me here. So I think it's the tedious stuff. If you do nothing else, being able to say, here's what we have, here's what it's there for, here's who's in charge of it, all that sort of stuff, that will go a really long way to making regulators feel better. And even in the absence of regulators feeling better, if, God forbid, the guy or the woman gets hit by a bus, you don't have to try to figure out what he or she did on his or her own.
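
Model cards, the import from the tech world Scott mentions, trace back to the 2019 "Model Cards for Model Reporting" paper by Mitchell et al. A minimal version, sketched below as plain data with entirely hypothetical field values (this is not Lemonade's template), shows the kind of fields that answer the "that's an old version, who built it, where's that guy" problem.

```python
# A minimal, illustrative model card kept as plain data; in practice it lives
# alongside the model and is updated with every retrain. Every value here is
# hypothetical.
model_card = {
    "model_id": "fnol-routing-classifier",
    "version": "3.2",
    "owner": "claims-data-science",  # answers "where's that guy?"
    "intended_use": "route first-notice-of-loss reports to the right queue",
    "out_of_scope_uses": ["pricing", "underwriting"],
    "training_data": "internal FNOL reports, 2019-2023, PII stripped",
    "evaluation": "see linked holdout evaluation report",
    "known_limitations": ["untested on commercial lines"],
    "change_log": [  # the iterations regulators ask about
        ("3.1", "2023-11-14", "retrained after drift alert"),
        ("3.2", "2024-05-01", "added new loss categories"),
    ],
}
```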

Rachel Jrade-Rice (52:41):
And regulators are pretty good too, if you fall on your sword when you find something that's not appropriate. All right, with the exception of New York and California, and maybe New Jersey. But being able to say to them, yeah, we f'ed this up, we're really sorry, we're going to go back and fix it. You have to know what happened to be able to do that.

Evan Daniels (53:04):
And I would just also add, my advice would always be that this should permeate every layer of the organization. So AI deployment should be part of a strategic plan, or, if not in the strategic plan, a supplement to the strategic plan that informs it. Decision trees should be clear and apparent. All these things that they're talking about, you want to be able to show that your organization, all the way to the top, has thought about them in a meaningful way.

Audience Member 2 (53:43):
How do you think about your internal parties, perhaps your own data science team, or third party companies that you might use, where they might create their own models but are also relying on other third party models, such as OpenAI's, for which they cannot really provide you documentation and explanation? How do you think about handling all that?

Rachel Jrade-Rice (54:10):
I just want to be sure I understand the question. Is it when third parties are using other companies' AI?

Audience Member 2 (54:17):
Yes. That, and also if your internal team decides to use it. For example, I know at Lemonade you say you are already using gen AI. I don't know if you're using your own large language models or, say, OpenAI or some other. How do you document all that?

Scott Fischer (54:33):
Yeah, how do we do it? It is rough, but it's the same sort of thing. We're not going to get the information directly from the third party. So take Verisk, we all know Verisk, right? Everyone here uses Verisk, I'm sure, or LexisNexis. We don't have all their information. But within our model card, within our governance, it's going to be specified: here's the data that we're getting from there, here's how it's going to get used, and here's how it's going to get deployed. The really difficult part, and maybe this is a little bit of what you're getting at, leaving aside the who-owns-what and who has rights and confidentiality, that's really hard to deal with, but the thing that will come up, and that I don't have a good answer for and am struggling with, is this expectation of verification or of validation.

(55:36):
As I say to the data scientists: well, if I needed to validate it, I'd need to have the information. If I had the information, I would need to go to Verisk. So what is it that you're asking me to do? And that's, I think, the biggest challenge. This is where, to make a plug for Evan, outside experts can be helpful, in trying to say, well, there are things that you can do to validate that aren't necessarily on the basis of what the data says, but rather how Verisk arrives at this information. And that's the kind of thing that you can negotiate, I guess, with Verisk or LexisNexis or someone like that: where are they getting it from, so that we can then use it. But the integration part certainly is challenging, because I would say the knee-jerk, normal reaction of anybody working anywhere would be, well, I got it from this guy, and yeah, I'm relying on them, and that's why I have indemnification provisions in my agreement. But that's a real sore spot. I think it's really the ability to go through and have a plan, or at least some sort of way in which you're validating the information, that will go a long way to allowing you to defend yourself and to having a level of comfort that, when somebody comes and second-guesses you, you thought it through.

Rachel Jrade-Rice (57:03):
Yeah. Sorry to get into the very nitty gritty of the process, but this has to do with how governance across the company affects how you govern AI. Your procurement system is really important here. When you are working with a new vendor, they come in through either your finance team or your legal team. We then vet them through an information security questionnaire, and then we have a DPA on all of our legal agreements where there's going to be data being used. And in fact, interestingly, OpenAI right now is refusing to negotiate DPAs with the companies that are using their enterprise software, and I don't blame them; I wouldn't want to have to indemnify for those types of uses right now either. But that's also part of the reason why it's really important that we have this employee use policy too. But we do ask questions of our vendors to ensure that they're operating as we would want them to, and subject to the regulations that we are also subject to.

Evan Daniels (58:12):
This is a very active topic in the banking industry right now. The federal regulators are very tuned into understanding how financial institutions are utilizing data from third parties. So on the insurance side, I think the insurance regulators might get to that same level of interest. I know there is, I don't know if it's a task force or a working group, one of those, there's a distinction there that I won't bore you with, I'm not sure I understand it, but in any event, there is a group at the NAIC, at the commissioner level, actively looking into this idea of making sure regulators have access to third party information. I say all that to say, if you're a third party wanting to work with insurance carriers, this could be a way to distinguish yourself: by thinking this through and being able to present it to carriers as a way to make their life easier. So I would say carriers are smart to be thinking about it; they might have rules to deal with in the near future, and so should third parties, for that same reason.

Rachel Jrade-Rice (59:19):
I know,

Audience Member 3 (59:21):
Mention to our questionnaire.

Evan Daniels (59:31):
Yeah. Very smart.

(59:33):
Well, we are out of time, so thank you for your questions and your attention. We will be around to chat, so please come chat with us and thanks again.