The ethics of AI in insurance

Past event date: July 27, 2022, 12:00 p.m. ET / 9:00 a.m. PT. Available on-demand. 45 minutes.

Ethical considerations around the use of data for insurance purposes have been a constant thread through the digital age. From questions around price optimization and credit scoring to new debates around data crawling, insurers face decisions about leveraging data sources for a variety of purposes, and risk consumer and regulatory blowback. When it comes to using data to feed artificial intelligence, the debates could reach a fever pitch. If AI is going to be empowered to make decisions at some future state, are insurers making sure that those machines are doing so without being inculcated with the unconscious biases that have plagued the industry? In this panel discussion, Digital Insurance editor-in-chief Nathan Golia and insurance leaders talk about how insurers can meet this daunting challenge as the pressure to digitally transform processes using next-generation technology grows.

Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.

Nathan Golia (00:17):

We have three great panelists with us today. I want to give them a chance to introduce themselves in just a second, but first I wanted to level-set on our topic, the ethics of AI in insurance. This is a topic that we're going to be following quite a bit at Digital Insurance. You've probably already seen that we did some coverage on it; we did some reporting that came out at the end of June, and we also had a session on it at Dig In in June in New Orleans. The reason is that, as we report on, and will talk about here, some of the use cases for AI across insurance and how companies are planning to use this technology in the future, we are also starting to hear more questions coming in from people about what happens if there's, for lack of a better term, somewhat of a backlash to the use of AI in insurance from various groups associated with the insurance value chain. We're going to talk about setting up governance to mitigate those issues here, in a lot of other reporting, and probably in more sessions going forward. With that, I'd like to turn it over to our panelists, Oliver Maguhn, Scott Harrison, and Trevor Burgess. Let's start with Oliver. Could you give us your title and your relationship to AI strategy?

Oliver Maguhn (01:41):

Yeah, thanks Nate. Thank you for having me here. So my name is Oliver. I work for Munich Re. I have spent my whole career in the insurance industry; I started elsewhere, and I've worked for Munich Re for about 15 years now. For the past three years I've dug into the topic of AI, and especially business models around AI. And so we started thinking about what the risks are that are associated with this new technology. And of course, ethical questions such as fairness are one important topic that we are going to touch on today. My background is in business engineering and I have a PhD in economics. I'm not a tech guy; rather, I try to build something around AI so that we can use it in a trustworthy manner.

Nathan Golia (02:37):

Great, thank you. Trevor, could you just tell us about yourself and your company?

Trevor Burgess (02:44):

Sure. I'm Trevor Burgess. I'm the CEO of Neptune Flood. We're the largest private flood insurance company in the United States, with around 125,000 clients. We use artificial intelligence to do the traditional role of underwriting. My last business was a bank, which was a highly, highly regulated organization, and we used some artificial intelligence there as well. So I have a long history of using technology in financial services, and I look forward to the discussion today.

Nathan Golia (03:19):

Thank you. And Scott.

Scott Harrison (03:22):

Hi, welcome. And again, I appreciate the opportunity to be with y'all today. I'm a lawyer by training and have spent almost 30 years working in and around regulation in the insurance industry. I was a senior regulator in the New York Insurance Department and, prior to that, the Delaware department back in the mid-nineties. Where I've spent much of my career, working with companies and advising companies as counsel or as a consultant, is really on how we modernize regulatory frameworks and regulatory systems to keep pace with changes, whether in technology or in demographics. Back in the nineties, when catastrophe modeling was brand-new technology, I was deeply involved, both in my work at the New York Insurance Department and at the NAIC, in the question of what regulators do with that information. So the development of technology and the application of technological advances in the insurance industry has been something I've focused on for a long time.

(04:32):

And really my particular area of focus is what we need to do to make sure that the regulatory model, the regulatory requirements and standards, makes sense, meaning that they continue to protect solvency, they protect consumers, and they provide wide swim lanes for innovation in the industry, innovation in insurance, that ultimately benefits consumers. So again, the discussions around AI are deeply interesting to me. I recently founded an organization called the American InsurTech Council along with some others. The goal of that organization is to represent its sponsoring companies in efforts here in the United States to modernize the regulatory system, so that we have a regulatory system that makes sense, that benefits consumers, and that continues to grow.

Nathan Golia (05:25):

Great. And if you haven't already guessed, yes, I invited Scott on to sort of represent the regulatory voice here. That is a big question that comes along with any ethical consideration and deployment of new technology. So thank you, Scott, for joining us today.

Scott Harrison (05:37):

Thank you.

Nathan Golia (05:38):

I want to start with our first question. We're going to talk about these governance issues around AI, but I think we should start by talking about where AI is currently being deployed across insurance. I think a good place to start is with you, Trevor, because you said you're already using it at Neptune Flood. We're going to go around and see some of the use cases where you've had success in your company, or that you've seen around the industry. Trevor, can you talk a little about how Neptune's using AI currently?

Trevor Burgess (06:09):

Sure. Maybe I'll talk a little bit about our journey in artificial intelligence. The framework that I like to operate under is first an academic one, which is to ask the question: is what we're doing good for society, good for humans? Is this something that is actually going to end up benefiting society? But then you have to think about how you operationalize that, because that's a very philosophical, high-level idea: looking at frameworks, looking at how you actually deploy that within your organization. What does auditing and monitoring look like? How do you educate your engineers to make sure that what you're building actually meets that lofty goal of being positive for society as a whole? So we started out by using artificial intelligence, which is a very broad, all-encompassing term, to really do math that humans can't do. And within the engine itself that is making traditional underwriting decisions, such as: should we provide insurance, yes or no?

(07:21):

What price should we charge? We are using very complex math that humans are just not able to do. We do not have our engine doing live learning, which is sometimes referred to as machine learning. We do that offline. So we'll do things like look at our claims experience, use machine learning to go through it, and see what it is about those claims, what it was about those properties or that piece of land, and how that can then inform the algorithms that we're running live. At some point, when frameworks are advanced enough, you could see the ability to have the system do live machine learning, but it's not something that I yet feel comfortable with, even though at Neptune we've taken a very principled stance, which is that we know nothing about the consumer when we're doing that risk selection and pricing. We do not ask for their name.

(08:22):

We start with only the address. That was very, very deliberate, and part of the framework that we're working under to make sure that we're not inadvertently introducing bias. There's that famous story about Amazon, which spent years and years and years trying to build an artificial intelligence system for hiring and could not get it to not systematically discriminate against women. So they scrapped it. Now, I hope they're back at it and working on it, but we have to be very, very, very careful here. So we've taken some bright lines: no live learning in the system, and we make sure that humans are involved in any updating of the algorithms. And we know nothing about who the client actually is when we're making these decisions up front. So that's some of how we've been thinking about it and using it at Neptune so far.
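
[A minimal, hypothetical sketch of the offline-learning pattern Burgess describes: identity fields are dropped before training, the model is fit offline on historical claims, and a human signs off before the live algorithm changes. The field names, model choice, and deployment step are illustrative assumptions, not Neptune's actual system.]

```python
# Hypothetical sketch of the offline-learning pattern described above;
# field names and model choice are illustrative, not Neptune's system.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Identity fields never reach the model: underwriting starts from the
# address alone, per the "know nothing about the consumer" principle.
IDENTITY_FIELDS = ["name", "email", "phone"]
PROPERTY_FEATURES = ["elevation_m", "distance_to_coast_km", "year_built"]

def train_offline(claims: pd.DataFrame) -> GradientBoostingRegressor:
    """Fit a loss model offline on historical claims; no live learning."""
    features = claims.drop(columns=IDENTITY_FIELDS, errors="ignore")
    model = GradientBoostingRegressor()
    model.fit(features[PROPERTY_FEATURES], claims["loss_amount"])
    return model

def promote_to_production(model, approved_by_human: bool) -> None:
    """The live algorithm changes only after explicit human review."""
    if not approved_by_human:
        raise RuntimeError("Algorithm updates require human sign-off.")
    # ...deploy the reviewed model version here...
```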

Nathan Golia (09:16):

Great. I wrote down a couple of notes we'll get back to when we're talking about our real governance questions. But Oliver, I wonder if you could jump in and tell us a little about how Munich Re is seeing AI deployed for insurance purposes.

Oliver Maguhn (09:29):

If I look from a reinsurance perspective at AI in the insurance industry, I see mainly three areas in which AI is currently used. First, AI will become a competitive edge. So first of all, it's an efficiency game: automating processes to make them faster, leaner, more efficient, and so on. This is the first thing. The second thing is to improve the customer experience with the insurer, so that you just need to send a picture to your insurer and your claim gets settled, for example. Or, if you have chatbots, you do not have to wait in endless queues to reach someone to get the information you're asking for. This is the customer experience. Then claims management is something we are also active in as a reinsurer, at Munich Re. For example, we have a project running which is called remote sensing.

(10:41):

As you know, we are very active in the natural catastrophe space, and after a natural catastrophe you typically cannot access the area. You do not know what your losses are or where to send the loss adjusters first. By analyzing images taken from aircraft, we are able to do a fast and reliable analysis and can already pay out the first claims, which also improves the customer experience, and we can manage the natural catastrophe by sending loss adjusters to the areas that were most affected. And the last thing is on the risk assessment side. For example, if you can just take a picture of your house and your house gets insured, it's much easier and much more convenient than filling out five pages of forms and then maybe getting a quote a long time later.
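
[A minimal sketch of the triage pattern Maguhn describes: rank areas by model-estimated damage from aerial imagery, dispatch loss adjusters to the worst areas first, and fast-track obvious total losses. The grid cells, scores, and threshold are illustrative assumptions, not Munich Re's actual remote-sensing project.]

```python
# Hypothetical post-catastrophe triage from aerial imagery; the grid,
# damage scores, and payout threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class GridCell:
    cell_id: str
    damage_score: float  # 0.0 (intact) to 1.0 (destroyed), from an image model

def triage(cells: list[GridCell], n_adjusters: int) -> list[str]:
    """Send loss adjusters to the most affected areas first."""
    ranked = sorted(cells, key=lambda c: c.damage_score, reverse=True)
    return [c.cell_id for c in ranked[:n_adjusters]]

def fast_track_payout(cell: GridCell, threshold: float = 0.9) -> bool:
    """Pay a first claim immediately when imagery shows near-total loss."""
    return cell.damage_score >= threshold
```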

(11:55):

So of course there are ethical issues when it comes to the use of AI in insurance, but I think it's more about identifying them and making them transparent. If we use AI, human beings tend to think it must be 100% correct. But that is the wrong benchmark, because human beings are also not fault-free. It's not about being fault-free; it's about outperforming the status quo, which is typically human beings. And especially when it comes to ethical questions, I think there's still a lot of room for improvement in how this is considered in society.

Nathan Golia (12:41):

Right. We're going to be addressing that point, I think, throughout the next couple of questions as we go around the horn here. But Scott, if there was any use case you'd seen that wasn't mentioned and that you're interested in mentioning, I wanted to give you a chance to weigh in; otherwise, I'll give you the next question first. Oh, we lost your voice, Scott.

Scott Harrison (13:09):

Sorry about that. I put my phone on mute. Now I'm happy to take your question. I think the use case discussion was really interesting, but

Nathan Golia (13:19):

Right. So I think what we've heard here is that there are a number of opportunities to use AI across the insurance value chain, throughout insurance processes. There are obviously reasons to explore them, but as both Trevor and Oliver alluded to, there are some questions and some governance to talk about, and Trevor almost seems to personally embody the "here's how we're setting these lines" approach; your example about Amazon was really interesting, something you wanted to avoid. So the first major concern we hear a lot from the consumer side is that algorithms might incorporate the unconscious biases of their programmers. Scott, you talked about the work you did with some early cat modeling in the nineties, and I know that every time there's been a new data model or new analytics in insurance, there's been this question of: is this accidentally a proxy for some immutable characteristic that we do not want to underwrite on? I was wondering if you could talk a little about what the insurance industry can learn from the times it's gone through this in the past, to make sure that these biases against immutable characteristics don't creep into AI deployments.

Scott Harrison (14:36):

Yeah, I think that's a great question, and maybe it's helpful to put into some context the role of the regulator here, and also where AI as a technology or a tool fits within the evolution of the insurance business. Generally, the role of the regulator, as I mentioned before, is to protect consumers, protect solvency, and facilitate the growth and development of a healthy insurance industry. That's what benefits our society; that's what benefits our economy. And when you drill down into what that means in a practical sense, it's really not the job of regulators, and regulators in the US have for a long time done a very good job of this, to tell companies how to run their business.

(15:30):

They have something to say about how you do certain things, but they really leave it up to the companies. If there's a technology that a company thinks is going to be advantageous to its business, it's really up to the company. At one point, mainframes were brand-new technology, and I'm sure there was discussion about, oh, companies are using these computers now; what does that mean for consumers, solvency, and the health of the industry? But how regulators go about this really boils down to a couple of simple concepts. The first concept is transparency. Regulators need to have a clear line of sight into what companies are doing and how they're doing it, whether it's the use of AI or their investments, how they do their underwriting, how they pay their claims and whether they pay them on time, how the companies run their business.

(16:27):

So transparency is a core concept. Auditability is the next concept. Every company is subject to examination, both on a financial basis and on what's called a market-regulation basis, to make sure that companies are following the law and are in compliance with various state laws and regulations. Examiners have been looking at how companies underwrite, how they pay their claims, and the technology and tools they've used to do that for decades; that's nothing new. AI and machine learning might present some new challenges around the ability of regulators to really get behind and understand how the tools are working. But the concepts are really the same and don't change. So when companies, and the folks who are developing the tools they want to sell to their insurance company clients, think about this, they should keep those concepts in mind: if your customer is an insurance company, its regulator is going to need to know how the company is employing AI.

(17:43):

There are laws in place around discrimination and bias. I mean, redlining was an issue a long time ago in this country, and regulators did a good job of addressing it. So these concepts of bias and things like that are not novel, though the development of technology raises some interesting questions about some new use cases and some new risks. But the core concepts and regulatory principles haven't changed and don't change. So the challenge for regulators is: how do we get coached up on the new technology? How do we develop rules of the road that ensure the basic principles of insurance, not just insurance regulation but insurance itself, are going to be adhered to, so that we can understand how companies are using the technology, so that we know and are confident that companies understand the technology they're using, and so that we can ensure consumers are being protected and our laws are being complied with?

Nathan Golia (18:40):

I actually just wanted to pick up on something you said, because you mentioned redlining in particular. One of the really insidious impacts of redlining in the US is that it has had downstream effects even after the problem was recognized and there was legislative pushback against it. And that is what I think people are concerned about when it comes to implementing things like AI: that insurers might be incorporating things from a system that was discriminatory without the correct amount of adaptation being made. So I would caution against assuming that once something is recognized, it's fixed; there's still a time period after that, and I think people don't want those periods to be extended because of the use of AI.

Scott Harrison (19:35):

Sure, sure. And I think the challenge in the example you mentioned, Nathan, with redlining, is whether AI presents opportunities to be more clever about it, or more surreptitious about it, or even to find yourself unintentionally doing it without knowing. This is where I go back to: it's vital that companies understand the technology they're using. They need to know what's in the black box, just as the regulators need to know as well. But that's a good point, and I think there's a lot of discussion at the NAIC about these topics right now. Unintended bias, intended bias, and proxy discrimination are hot issues, and it's going to be interesting to see how this gets worked out. But as I said, the basic concepts of insurance regulation and the objectives and responsibilities of regulators don't get altered just because the technology we're using has changed or become different. Those principles are the same.

Nathan Golia (20:41):

Right. Trevor, maybe you could weigh in here, because, as I said, when you were explaining Neptune's evolution with AI, you seemed to embody an understanding of things like proxy discrimination. How can we make sure that as we're implementing this, we keep those steps in front of us and don't incorporate these things? Could you talk a little about implementing that at an organizational level?

Trevor Burgess (21:09):

I think Oliver brought up a really important point, which is that humans actually have a lot of inherent bias and the ability to introduce a lot of noise into insurance underwriting and risk selection. I spent a lot of time with human underwriters and agents leading up to building Neptune's system, and I heard some very funny things. Well, they were funny to me: "I don't like the color of this house." "This house doesn't look like they take good care of it." "They seem to have a lot of kids over at this house." "I don't like that they have a chain-link fence." Okay, we do flood underwriting. Guess what? The hurricane cannot see any of those things. The storm surge does not care. Not one of those things has any impact whatsoever on the loss that our reinsurers are going to suffer or not suffer.

(22:12):

But those were the types of things being discussed by humans thinking about home and flood underwriting, and that was, well, four and a half years ago. So humans inherently have a lot of biases built in, and there are some ways that technology can, from day one, do a much, much better job of disconnecting from those sorts of historical challenges. At Neptune, we started again with this framework, and I had the great benefit of coming from a highly regulated background, actually much more regulated than insurance, since banking has both federal and state regulation, and the Federal Reserve and the FDIC have had a very big focus on unconscious bias for a long time. So it was front of mind for myself and my team, as we built our system, to make sure that the humans designing the system really understand what the unintended consequences could be.

(23:22):

Now, to Scott's point about transparency and auditability, I will just raise a concern: we have to hire very, very, very expensive people who know what they're doing to build these systems. I hope that the key regulators are able to keep up with, quite frankly, the cost of hiring people who can, number one, understand the systems and, number two, have the power and ability to do the auditing, because it's difficult. Four years ago, I could tell you very easily why our black box said yes or no. Four and a half years later, we're on version 130 of the system; I could still get you the answer, but it would take me a while. Four and a half years from now, I'm not sure. More and more detail, more and more data, more and more information, more and more complex algorithms, more and more computing power.

(24:25):

Some of the things that we're doing at Neptune require computing power that did not exist four years ago. So this is changing so quickly; it's that Moore's-law, doubling-of-transistors sort of thing. I hope that the regulators can keep up with it. And one of the reasons I wanted to participate in this panel today is that I think one of the ways those of us in the industry who are building these things can operationalize that ideal of being good for society is to come up with our own frameworks that we agree upon and operate under. That would help the regulators a lot.
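
[One minimal, hypothetical way to operationalize the auditability Burgess describes is to log every automated decision with the algorithm version and the inputs it saw, so "why did we say yes or no?" stays answerable as the system evolves. The fields and helper below are assumptions illustrating the pattern, not Neptune's actual system.]

```python
# Hypothetical decision-audit-log pattern; fields are illustrative only.
import json, time

def log_decision(version: int, inputs: dict, decision: str, reasons: list[str]) -> None:
    """Record enough context to reconstruct an underwriting decision later."""
    record = {
        "timestamp": time.time(),
        "algorithm_version": version,  # e.g., version 130, per the discussion
        "inputs": inputs,              # the property features the model saw
        "decision": decision,          # e.g., "quote" or "decline"
        "reasons": reasons,            # top factors behind the decision
    }
    with open("underwriting_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: an examiner can later reconstruct any decision from the log.
log_decision(130, {"elevation_m": 2.1, "distance_to_coast_km": 0.4},
             "decline", ["low elevation", "coastal proximity"])
```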

Nathan Golia (25:08):

Right. I know I'm going to sort of seem like I'm hammering on this point, but I really want to make it

(25:14):

Because, Trevor, when you talked about hearing the underwriters you consulted to help design your program talk about the chain-link fence and the color of the house, that could have gone one of two ways. An alternate-universe person might've said, okay, so I guess that's an important data point for my flood underwriting system, but you were very careful to exclude those. And I think that's what we need to talk about here: insurance companies can't just be pulling these things off the shelf. They need to be really auditing them up front to understand what the inputs are, in order to avoid these issues down the road.

Trevor Burgess (25:51):

And I think that's an important point. We built ours ourselves, whereas others are just buying off-the-shelf systems and saying, oh, now we're just going to run it.

Nathan Golia (26:01):

Oliver, why don't you jump in? I know you're curious to.

Oliver Maguhn (26:04):

I would strongly support what Trevor is saying. I think you have to work on different levels, and the first level is your own organization. I'm coming from Europe, and in Europe the discussion around trustworthy AI began back in 2018, when the European Commission established a high-level expert group advising politicians on how to deal with AI within a European value-based system. At Munich Re we took this up and initiated our first AI principles, which we published in our corporate responsibility report in 2020, and we further developed this into a mandatory e-learning tool for each and every developer who gets their hands dirty with AI algorithms. So we established processes in our company that make sure we approach AI with a certain policy, a certain mindset, and so on. These internal things are very, very good, but now we are taking this a little bit further.

(27:32):

We saw that there is a need out there for an independent third party to look at an AI system and provide proof that it complies with certain criteria. Together with my team, I'm currently building a service, not an insurance product but an AI certification service, that provides proof of trustworthy AI. This is a new business model that we are currently deploying. What you said about transparency: this is also one of the risks we look at when we assess trustworthy AI, but transparency, to be very honest, is technically the most complex dimension, and I can only support what Trevor just said. It's not yet always possible to give the cause of this or that decision, which makes it difficult for the regulator. So we have to find some workarounds, maybe, to make sure that the AI, in how it is used, processed, deployed, and maintained in a company, works in a proper manner.

Nathan Golia (28:55):

I think that when you talked about CertAI, which I believe is what that initiative is called, and trustworthy AI, it's not just about whether it works to do the job. Part of it is whether it avoids these risks, whether you could get sued for bias because you built it into your system. That's part of your certification process, right? You're looking at all those dimensions.

Oliver Maguhn (29:19):

Exactly. So what we did was define trustworthy AI for ourselves, because there is no official definition, though definitions of trustworthy AI are somehow converging. We look at robustness; we look at things like safety and security; we look at things like autonomy and control, fairness, privacy; and then one last dimension, transparency, of course. And we created a methodology that allows us to assess, end to end, each and every AI system, use-case specific, to come up with a meaningful result, and to provide our customers with a quality seal, a stamp, saying that this AI complies with our homemade standard, which is inspired, of course, by the work that is being done out there by regulators, by standardization committees, by academia, and so on. And this is a hell of a lot of work, but I think we are now quite advanced in this regard, because there are many others out there who are just starting to look at the processes, which is the first thing I would say. We also have the technical capabilities to look deeper into the technology, but it's a complex thing and it's an ecosystem approach. So we don't do this alone; we do this together with partners.
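
[A purely illustrative sketch of a use-case-specific assessment over the six dimensions Maguhn names. The scoring scheme and threshold are assumptions, not CertAI's actual methodology.]

```python
# Illustrative trustworthy-AI assessment; the scoring scheme and
# threshold are assumptions, not CertAI's actual methodology.
from dataclasses import dataclass

DIMENSIONS = ["robustness", "safety_and_security", "autonomy_and_control",
              "fairness", "privacy", "transparency"]

@dataclass
class Assessment:
    use_case: str
    scores: dict[str, float]  # dimension -> 0.0..1.0, assessed end to end

    def qualifies_for_seal(self, minimum: float = 0.8) -> bool:
        """Every dimension must clear the bar to earn the quality seal."""
        return all(self.scores.get(d, 0.0) >= minimum for d in DIMENSIONS)
```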

Nathan Golia (30:54):

Scott, before we move on to the next question: you opened this one, and we've been to a lot of different places since. I just wanted to see if you had any closing thoughts on this initial question.

Scott Harrison (31:02):

Yeah, look, I think there's no question that from a technological standpoint it's a really high hurdle for regulators, and I don't speak for the NAIC, but I'll just mention the trend in regulation. Over the past 20 years, as the business has become more complex across the board, financially, in investments, everything, and products have become more complex, regulators at the NAIC have moved away from rules-based or prescriptive approaches to regulation toward more risk-based approaches. We see that in the valuation of life insurance reserves and in financial solvency regulation becoming more risk-based. And the third principle that I didn't mention, in addition to auditability and transparency, is accountability. The trend has been for regulators to set some principles, as they did in cybersecurity, really some guideposts for the companies to follow, but then understand that accountability ultimately rests with the company. Regulators can't be there; they can't know everything that's going on in the company. And I expect that something similar will happen with respect to AI.

Nathan Golia (32:20):

That's actually a pretty good segue to our next question. We've got about 10 minutes left, so I'm just telling the audience: if there are any questions you want to ask, ask them soon, otherwise we might not get to them. I'll try to work one in before the end here. But I think that, unifying the answers all of you have given to this first question, it goes into our second ethical concern about AI, which has to do with the perception that it will replace humans up and down the insurance enterprise. That perception comes from things that are out there in the culture and the literature, but I don't think that insurance companies necessarily champion it as a bid to get rid of people.

(33:11):

I think there is a way to help retrain your workforce to understand how to work with AI, and I think that, with some of these ethical areas we talked about, developing the governance around it is going to be where people come in. So I'll leave this open to anyone who wants to start: if you're running an insurance company and you say, well, we're going to deploy this AI solution, and your employees start looking around wondering if it's going to replace them, how should you communicate to your employees that there are going to continue to be roles for them in a world where the AI is doing some of what Trevor talked about at the outset, the math that is harder for people to do? Anyone want to take that one first? I know it's a little hard when we're on Zoom and live, but

Trevor Burgess (34:04):

Well, I can just say that at Neptune, our system, the algorithms we've built, is currently processing around 12 million quotes per year. So if I were to divide that by 250 working days and eight hours a day, I'd need 6,000 human underwriters working for Neptune to do the job that is being done by our engine. We've had the benefit of building the company from scratch, from employee zero up, so we have the right number of employees doing the right roles today; no one is being replaced, but it's very purposefully built. So it's easier for Neptune to answer that question. Some of this I really think about as having humans do human-level activities. So things like retraining, education, and investment in core societal elements are really, really important if we're going to have the workforce of the future that we need, people who are able to understand and work with these sorts of concepts. Again, there are not 6,000 property underwriters in the United States that I could even hire.
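
[A back-of-the-envelope check of that staffing arithmetic, assuming one quote per underwriter-hour, which is the implicit rate in the statement above.]

```python
# Back-of-the-envelope check of the staffing arithmetic above,
# assuming one quote per underwriter-hour (the implicit rate).
quotes_per_year = 12_000_000
working_days, hours_per_day = 250, 8
underwriters_needed = quotes_per_year / (working_days * hours_per_day)
print(underwriters_needed)  # 6000.0
```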

Scott Harrison (35:24):

Right. From my perspective, I couldn't agree with that more. Look, the development of the industrial economy has been that we've developed machines that have replaced what humans once did, but that didn't replace humans. It freed humans up to do things that were more productive and that maybe increased the value of the company, the industry, the business. I see the same principles applying here. It's not a zero-sum game. If there are some functions that can be done more efficiently by a machine, then so be it. That frees up the human capital in your companies to be deployed doing things that derive greater value for your company and your customers, your policyholders.

Nathan Golia (36:19):

Oliver, when you were going through your use cases, you named a few: efficiency, improved customer experience, claims management, risk assessment. I think that sometimes when people hear AI talked about, they hear efficiency and just stop there. How do you see AI supplementing human workers across insurance enterprises in those other areas?

Oliver Maguhn (36:41):

Yeah, if I take the view of a reinsurer here, we have a different business model compared to a primary insurance company. We have a very heterogeneous, very human-driven, customized business. So when it comes to process efficiency, process optimization, replacing human work, there are natural limits within the reinsurance industry. Of course, it's a different game for the primary insurance industry, but even there: in Germany, there are some quite radical digital banks that started out fully digital, and they realized that at a certain point you need some human in the loop; you can't cover all cases just with a machine. What I do see is a shift in the profiles that we are going to hire. In reinsurance, in the seventies and eighties, it was the lawyers; then it was the business people with the financial background; then the math people and so on, very quantitative people. And now we have the decade of the data science, data analytics people. So there is a bit of a shift, but I would say it's a kind of natural journey that we are following.

Nathan Golia (38:27):

Yeah, I think there's a corollary there with customers who hear that there's automation, or that the insurance company they may have worked with is rolling out something automated, and are concerned: well, I'm not going to have a person to appeal to anymore; it's all going to be decisions made by a machine. But I think our panelists would agree that insurance companies can't do that. I think that's what you just said, Oliver: no, we can't just get rid of everyone; it's not how it works. Especially if, like Trevor, you're processing so many more quotes, you're going to need people to answer those questions.

Trevor Burgess (39:03):

At Neptune, we actually,

Scott Harrison (39:06):

Go

Trevor Burgess (39:07):

Ahead, Scott.

Scott Harrison (39:07):

Go ahead, Trevor. Go ahead. No, I was going to say: ultimately the market determines that. If there's a company that wants to have no human involvement whatsoever and consumers reject that, well, then that company will either change its business model or go away. So the marketplace will, I think, set the right level of human interaction alongside the increased efficiency of things that machines do.

Trevor Burgess (39:36):

I think one of the things we've found at Neptune is that we need to hire smarter people to be the ones interacting, given that they have to answer questions about how things like the algorithms work. So rather than the normal customer service organization that most insurance companies have, we have a team of people we call customer success representatives. They're all college educated; we do IQ testing, and they have to have an IQ in the top 15% of the population; and they are licensed nationwide, having been able to pass the licensing tests nationwide. We find that it's only when we meet those criteria that we're actually able to have this bridge between very complex artificial intelligence and machine learning and the ability to communicate with customers. So we have set a very high bar for the team members we need to hire, but it does sort of scare me in the long term: how are we going to help educate consumers? How are we going to help educate our workforce to be able to deal with these complex systems of the future? Investment in education would be a good place to start.

Nathan Golia (40:53):

Two minutes left. Oliver, I just wanted to give you a last word on this topic if you wanted to weigh in. Otherwise we can close out here in a second.

Nathan Golia (40:59):

Really great discussion though, everyone; that has been

Oliver Maguhn (41:02):

A very brief answer on what we do: we created our own training program for our quantitative people. We have a lot of people with a background in math, physics, and things like this who have the basic capabilities to do data science and data analytics but who are not yet educated in it. We have trained over 400 people so far; we have these black-belt courses and so on. We are quite lucky because we have this portfolio of people, and now we have these 400 people across the globe doing this really new type of work with data, with AI, and so on. And this has been very helpful for us in being successful in this area.

Nathan Golia (41:58):

So I just wanted to close out here with our minute left and say that, looking at the questions we didn't get to, there's a lot to talk about in this area. As the insurance industry increasingly embraces AI and machines make an increasing share of certain decisions, there's going to be a sort of re-architecture of the insurance industry around that. We're going to be chronicling that here at Digital Insurance, and we're going to be reaching out and asking these questions about where we are now. We're going to be checking in along the way and asking: what things have come up? What unknown unknowns have we discovered that we now have to know? This is just the first of many discussions I'm sure we'll have. I want to thank our three panelists, Oliver, Trevor, and Scott. Thank you for joining me today.

(42:44):

This was a really great discussion. Thank you to everyone who attended. We'll be back next month talking about another topic we will probably not be able to cover all in 45 minutes: sustainability in insurance, the interaction between insurance companies and the environment, and what the insurance company's role is in adapting to a changing environment. That's going to be a big one; we're working on the content for it now and it's a lot. We're covering these big issues all year at Digital Insurance and dig-in.com. Thanks again to our panelists and thank you to all of our attendees.

Scott Harrison (43:18):

Great. Thank you. Appreciate it.

Speakers
  • Nate Golia
    Editor-in-Chief
    Digital Insurance
    (Host)
  • Trevor Burgess
    President and CEO
    Neptune Flood
  • Oliver Maguhn
    Senior Project Manager, Artificial Intelligence, Munich Re; Co-founder and Lead, CertAI
  • Scott Harrison
    CEO and Co-Founder
    American InsurTech Council (AITC)