Of course, insurers and insurtechs are already using AI across lines and in areas like claims processing and distribution. However, there are potential challenges ahead as the technology becomes increasingly involved in modeling and pricing. A major concern of consumers and regulators is the potential for structural biases to be built into AI. Can machines be adjusted to reflect equitable practices, or are they doomed to internalize – and build on – the conscious or unconscious biases of their programmers?
Insurers have spent the past several years ramping up diversity and equity initiatives not just within their workplaces, but in their products and services as well.
For example, the American Property Casualty Insurance Association says in its “
Those statements are echoed throughout the insurance industry, and have left digital leaders searching for a middle ground as they work to incorporate technologies like AI into their workflows.
“The vast amounts of data and ever expanding computing power is accelerating the use of AI within the insurance industry. And while this tool can greatly aid businesses across the sector, it also raises new challenges to be addressed, including consumer privacy and safeguards to protect against unintended discrimination that may be built into algorithms,” Jon Godfread, North Dakota Insurance Commissioner and Chair of NAIC’s AI Working Group, said in a statement.
In conversations with Digital Insurance, experts across insurtech traced the boundaries of the debates around corporate governance and ethics, and how those interact with AI initiatives. Some say that “bias” is an inaccurate term for the problem. AI engines for insurance underwriting have to make value judgments in order to provide accurate pricing. What they can’t do is make those judgments based on immutable characteristics like race, says Eric Sibony, chief product and science officer and co-founder of Shift Technology, an insurtech that built an AI fraud detection system.
“We need the algorithms to be biased, otherwise it would mean everything is the same. The algorithm is discriminating, [which] is a form of bias. What we don’t want is a bias related to personal characteristics,” Sibony says.
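Sibony’s distinction – a model must discriminate on risk, but never on personal characteristics – can be sketched as a screening test on a model’s decisions. The groups, approval figures and the four-fifths threshold below are illustrative assumptions, not drawn from any insurer’s actual pipeline:

```python
# Illustrative sketch: screening an underwriting model's decisions for
# disparate impact on a protected group. All data and the threshold
# here are hypothetical.

def approval_rate(decisions):
    """Fraction of applicants approved in a group (1 = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of approval rates between the protected and reference groups.
    Values well below 1.0 suggest the protected group is disadvantaged."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical decisions for two demographic groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 40% approved
group_b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)

# The "four-fifths rule" (0.8) is a common regulatory screening threshold;
# a ratio below it flags the model for review.
flagged = ratio < 0.8
```

A test like this says nothing about why the gap exists – it only flags that risk-based pricing and group outcomes have diverged enough to warrant a human review.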
Anthony Habayeb, CEO and co-founder of Monitaur, an AI governance and software platform, says that while “intelligence” is in the name, AI is in danger of being overly anthropomorphized – that is, treated as a conscious being whose behavior cannot be changed. It’s not too late to alter the trajectory of its implementation, he says.
“Bias is a human problem, the context is we need to recognize that AI is another form of a system and a system is a product of people, process and tech,” says Habayeb. “AI cannot be the problem. The idea of ethical principles in AI should be an extension of [corporate ethics].”
Amaresh Tripathy, senior vice president and global supply analytics leader at Genpact, an IT and business services firm, says there is a philosophical layer to establishing guidelines and having conversations about ethics.
“There are a few places where those conversations are being forced. Banking for instance, you see a lot of it happening because of regulations,” says Tripathy, adding that healthcare and other financial services industries are also having ethical conversations. “Beyond that, in other industries, they’re at a level where people are learning about it rather than doing it.”
Tripathy suggests such questions as: What is fairness? What is equity? What is the responsibility within that? What is the role of the organization or company in society?
“I think it goes back to the values of companies and it’s a reflection on the vision and mission statement,” says Tripathy. “Who is the owner of ethical AI in an organization? Raise that question.”
There is a point where diversity and equity concerns in AI development coincide with similar efforts being made in other parts of the insurance industry. At a time when insurers are looking to recruit the next generation of digital staff, Habayeb says that having a diverse group of programmers is going to be essential in avoiding unconscious biases creeping into algorithms.
“Tech and software isn’t the most diverse ecosystem,” Habayeb adds. “I’m a white male that is building a software company, there is a privilege… Are we walking the walk? It is not easy, I don’t always know if I’m doing as well as I can but I want to build a company that has a positive impact and we’re honest about the values.”
How it’s working
Lemonade, an AI-focused insurtech, has done just that. The company has engaged Tulsee Doshi, Google’s head of product for responsible AI.
Doshi tells Digital Insurance in an email that it is most critical for insurtechs to understand the history and social context of insurance as it connects to systemic discrimination.
“Insurance has been a critical part of economic infrastructure for centuries, and it is based on other layers of critical infrastructure–housing, transportation, etc. that have historically worked differently for and marginalized certain communities,” Doshi said. “Building this understanding is critical to considering and addressing it when building and designing products.”
Doshi said that she partnered with Lemonade because the company is being intentional about responsible AI and that there are conversations about when to use AI, how to measure and improve fairness and how to ensure humans are included. Those conversations come in the context of a company that was involved in a class action lawsuit for allegedly violating biometric privacy laws in Illinois.
The insurtech also recently released a podcast.
“Some feel that more data will only exacerbate a problem; however, in insurance I believe the opposite is true,” said Lemonade co-founder Daniel Schreiber, adding that the company has been advocating for the use of a uniform loss ratio – where instead of pooling premiums, big data and AI are used to charge each person an individualized rate based on their specific risk.
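The uniform loss ratio idea can be sketched simply: each policyholder’s premium is derived from their own expected loss, so expected claims make up the same share of premium for everyone. The target ratio and the loss figures below are invented for illustration, not Lemonade’s actual numbers:

```python
# Illustrative sketch of uniform-loss-ratio pricing: rather than one
# pooled rate, each premium is set so that expected_loss / premium is
# the same target ratio for every policyholder. All figures are made up.

TARGET_LOSS_RATIO = 0.70  # expected claims as a share of premium (assumption)

def individualized_premium(expected_loss):
    """Premium implied by an individual's modeled expected loss."""
    return expected_loss / TARGET_LOSS_RATIO

# Hypothetical modeled annual expected losses for three risk profiles.
expected_losses = {"low_risk": 140.0, "medium_risk": 350.0, "high_risk": 700.0}

premiums = {profile: individualized_premium(loss)
            for profile, loss in expected_losses.items()}

# Each profile pays a different premium, but every policyholder's
# expected_loss / premium equals the same 0.70 target ratio.
```

The debate Schreiber alludes to is whether this individualization corrects cross-subsidies or simply shifts costs onto higher-risk groups; the arithmetic itself is the easy part.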
Schreiber suggests that the first step to these conversations is for insurers to establish company values.
“Data can help immensely speed up processes, but in certain instances should still be viewed through a lens of human values that a company is aligned on,” he said.
In addition, Munich Re recently announced CertAI, a new AI validation service that provides proof of an AI system’s trustworthiness.
Dr. Oliver Maghun, Munich Re senior project manager of artificial intelligence and co-founder of CertAI, said that CertAI assesses trustworthy AI along six dimensions: robustness, transparency, security and safety, fairness, autonomy and control, and privacy.
“A trustworthy AI system is developed, deployed, operated and monitored in a way that in any time the relevant trustworthy dimensions are fulfilled,” Maghun said.
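One way to picture an assessment like the one Maghun describes is a scorecard over those six dimensions. The dimension names come from the article; the scores, threshold and pass/fail logic below are invented for illustration and are not CertAI’s actual methodology:

```python
# Hypothetical scorecard over the six trustworthiness dimensions named
# in the article. Scores, the 0.8 threshold, and the gap-finding logic
# are illustrative assumptions only.

DIMENSIONS = [
    "robustness", "transparency", "security_and_safety",
    "fairness", "autonomy_and_control", "privacy",
]

def assess(scores, threshold=0.8):
    """Return the dimensions scoring below the threshold.
    An empty result means every dimension is currently fulfilled."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return [d for d in DIMENSIONS if scores[d] < threshold]

# Hypothetical scores for a system under review.
example_scores = {
    "robustness": 0.90, "transparency": 0.85, "security_and_safety": 0.95,
    "fairness": 0.70, "autonomy_and_control": 0.88, "privacy": 0.92,
}

gaps = assess(example_scores)  # fairness falls short of the threshold
```

The point of Maghun’s framing is that such a check is not a one-time gate: a deployed system has to keep satisfying every dimension as it is operated and monitored.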
Privacy and cybersecurity concerns are both potential challenges to further AI implementation within the industry, but insurers are moving forward with the technology with those concerns in mind.
“There is no replacement for humans in the loop,” Lemonade’s Doshi concludes. “Evaluating fairness in insurance is particularly complex because insurance is in the business of predicting risk–that risk may or may not come to bear, and so there isn’t common ground truth. As a result, it is important to evaluate algorithms in insurance in multiple different ways, across time.”