New York State's new guidance on insurers' use of artificial intelligence makes clear that carriers remain accountable for the outcomes of AI systems, even when those systems come from outside vendors.
"If you're using third-party systems, you cannot punt the accountability to the third party," said Karthik Ramakrishnan, co-founder and CEO of Armilla, an AI model and verification technology company that serves the insurance, financial services, healthcare, retail and other industries. "The insurer is still accountable for the end outcomes and that's what the circular really tries to emphasize."
New York's guidance came from its Department of Financial Services, which regulates insurance, in the form of a circular related to insurers' use of artificial intelligence systems and external consumer data in underwriting and pricing.
What can insurers do to ensure they are compliant with the circular? First, Ramakrishnan recommends insurers set a governance policy for how they collect data and develop and train models. Second, insurers should examine how their models operate in production and where they intend to use them. "Where are the areas where we are okay to use AI and where we won't?" he said.
This, in turn, requires understanding what thresholds an insurer will set and how it trains its data scientists and holds them accountable, according to Ramakrishnan. Finally, insurers must monitor the governance processes they have put in place, he added.
There are aspects of using AI where insurers should go beyond what is mentioned in the New York regulatory guidance, according to Ramakrishnan. AI models should be tested for bias and for how changes to input variables affect outcomes, he said.
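To make that concrete, here is a minimal sketch of the kind of bias and sensitivity checks he describes, assuming a scikit-learn style model; the column names, groups and data are hypothetical illustrations, not taken from the circular or from Armilla.

```python
# A sketch of two checks: a simple fairness gap across groups and a
# sensitivity test on one input variable. All names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in favorable-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

def sensitivity(model, X: pd.DataFrame, column: str, delta: float) -> float:
    """Average change in predicted score when one input variable is shifted."""
    shifted = X.copy()
    shifted[column] = shifted[column] + delta
    return float((model.predict_proba(shifted)[:, 1] - model.predict_proba(X)[:, 1]).mean())

# Illustrative usage with synthetic underwriting-style data.
rng = np.random.default_rng(0)
X = pd.DataFrame({"age": rng.normal(45, 10, 1000), "claims": rng.poisson(1, 1000)})
y = (X["claims"] > 1).astype(int)
model = LogisticRegression().fit(X, y)

decisions = pd.DataFrame({"group": rng.choice(["A", "B"], 1000), "approved": model.predict(X)})
print("approval-rate gap between groups:", demographic_parity_gap(decisions, "group", "approved"))
print("effect of +5 years of age on scores:", sensitivity(model, X, "age", 5.0))
```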
Insurers also ought to look at how AI models perform. "Can we explain the model well? Do we understand how it makes these decisions?" Ramakrishnan asked. "Which features are important in driving decisions? And robustness: does the model do well on unseen data?"
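A brief sketch of those explainability and robustness checks, assuming a scikit-learn classifier and synthetic data in place of a real underwriting model; permutation importance stands in here for whatever feature-attribution method an insurer actually uses.

```python
# Score a model on held-out data ("unseen data") and rank which features
# drive its decisions. The model and dataset are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# "Does the model do well on unseen data?" -- evaluate on a held-out set.
print("held-out accuracy:", model.score(X_test, y_test))

# "Which features are important in driving decisions?" -- permutation importance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```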
Depending on their accuracy levels, AI models also can be trained to handle data and situations they have not seen before, according to Ramakrishnan. The aim is to avoid "data drift" and "concept drift," he explained. "This is a very specific concept to machine learning, where if it sees too much data that's outside of its realm, then it may start making more and more erroneous decisions and outcomes," he said. "You should know how your model is behaving in production."
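One way an insurer might watch for data drift in production is to compare the distribution of each input feature against its training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test, with hypothetical feature names and a threshold chosen only for illustration.

```python
# Flag features whose live distribution has shifted away from training data.
# Feature names, data and the p-value threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def drift_alerts(train_features: dict, live_features: dict, p_threshold: float = 0.01):
    """Return (feature, statistic, p-value) for features that appear to have drifted."""
    alerts = []
    for name, train_values in train_features.items():
        stat, p_value = ks_2samp(train_values, live_features[name])
        if p_value < p_threshold:
            alerts.append((name, stat, p_value))
    return alerts

# Illustrative usage with synthetic data: the "age" feature has shifted in production.
rng = np.random.default_rng(0)
train = {"age": rng.normal(45, 10, 5000), "premium": rng.normal(1200, 300, 5000)}
live = {"age": rng.normal(55, 10, 1000), "premium": rng.normal(1200, 300, 1000)}
print(drift_alerts(train, live))
```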
New York is not the first U.S. state to consider regulating the use of AI in insurance, but it is one of the first to issue policy or rules on the subject. Last year, Colorado's insurance regulator adopted governance and risk management requirements for life insurers' use of external consumer data, algorithms and predictive models.