Applying cybersecurity lessons learned for AI regulation

The U.S. Capitol building in Washington, D.C. Photographer: Stefani Reynolds/Bloomberg

In February 2015, Anthem, Inc. disclosed that criminal hackers had breached the company’s servers and potentially stolen 37.5 million records containing consumers’ confidential personal information (CPI). The breach was a catalyst for insurance regulators, ultimately resulting in the Insurance Data Security Model Law now being adopted by states across the U.S.

Similarly, in the summer of 2020, regulators’ discussions about race and its role in the design and pricing of insurance became the catalyst for defining regulatory expectations for the use of artificial intelligence (AI) in the insurance industry. As regulators and insurers work to understand the level of regulatory oversight that AI innovation will require, we can find a path forward in the work regulators accomplished on cybersecurity.

The making of a model law
Although state insurance regulators were already discussing the protection of consumers’ CPI, the Anthem breach placed a laser focus on data security. By April of that same year, the National Association of Insurance Commissioners had adopted the “Principles for Effective Cybersecurity: Insurance Regulatory Guidance.” These principles included establishing a minimum set of risk-based cybersecurity standards, establishing appropriate regulatory oversight, requiring incident response by insurers, holding insurers accountable for third parties and service providers, incorporating cybersecurity risks into insurers’ enterprise risk management processes, and identifying material risks for insurers’ boards of directors.

Over the next 18 months, the NAIC used these principles to draft a model law establishing standards for data security and for the investigation of cybersecurity incidents and notification of state insurance regulators. During this process, the drafters quickly recognized that insurers came in different shapes and sizes, utilized data differently, and had different levels of systems and expertise.

The same was true of regulators. Because cybersecurity is not inherently an insurance-only issue, an expert in insurance regulation was not necessarily an expert in cybersecurity, and departments of insurance were not uniformly staffed with cyber experts. The new law needed to strike a balance: appropriate regulatory oversight that adapted to limitations on both the insurer and regulator sides of the equation.

In October 2017, the NAIC adopted the Insurance Data Security Model Law, which tackles a highly technical domain comparable in complexity to what we will soon face with AI. In the law, I see five actionable areas of regulation:

  1. Proactive identification and mitigation of risks
  2. Ongoing monitoring and reporting of potential risks
  3. Accountability for third parties
  4. Compliance certification to regulators
  5. Transparency on significant events to regulators and opportunity to remediate

Additionally, the model law gives the insurance regulator the power to examine and investigate insurers while providing confidentiality protections for the information insurers submit.

In adopting this model law, regulators successfully balanced significant regulatory oversight with placing the responsibility for compliance, and for notification of non-compliance, on the insurers, who employ the necessary cybersecurity expertise. The result was a model law that allows regulators and insurers to prioritize the protection of consumers’ CPI through an appropriate allocation of resources and expertise.

A parallel path for AI
Just five years later, regulators find themselves addressing another fast-growing, high-impact technology that is not inherently an insurance-only issue: AI. It brings the familiar challenge of insurers at different levels of engagement with AI, with different levels of systems and expertise. It also highlights, once again, the position of regulators with strong expertise in insurance regulation but not necessarily in the nuances and risks of AI. As regulators look at creating model regulation, they will again need to strike that balance of ensuring appropriate regulatory oversight while recognizing limitations on both sides of the equation.

As they did with cybersecurity, regulators have adopted high-level guiding principles regarding AI. The NAIC's Principles on Artificial Intelligence are intended to establish guidelines for AI use and assist regulators in addressing regulatory oversight of insurance-specific AI applications. This time, though, the regulators also have the benefit of a potential roadmap to help navigate the development of a well-defined regulatory approach.

When the NAIC principles are overlaid on the five regulatory areas outlined above, a path forward quickly emerges, one that emphasizes the key principles of accountability, compliance, transparency, and safe, secure, fair and robust outputs.

1. Proactive identification and mitigation of risks
A company should have systems and resources in place to proactively comply with all applicable insurance laws and safeguard against AI outcomes that are either unfairly discriminatory or otherwise violate legal standards.
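Neither the model law nor the NAIC principles prescribe how to test outcomes for unfair discrimination. Purely as an illustration, a minimal sketch of one common screening heuristic, the four-fifths rule borrowed from U.S. employment law, might look like the following in Python; the function name, data, and 0.8 threshold are all hypothetical choices rather than requirements of any insurance regulation.

```python
def disparate_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest to the highest favorable-outcome rate.

    `outcomes` maps a group label to a (favorable_count, total_count)
    pair, e.g. approvals out of all applications scored by a model.
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical backtest: approval counts by group for a pricing model.
ratio = disparate_impact_ratio({
    "group_a": (180, 400),  # 45.0% approval rate
    "group_b": (140, 400),  # 35.0% approval rate
})

# The four-fifths heuristic flags ratios below 0.8 for human review.
if ratio < 0.8:
    print(f"Flag for fairness review: ratio = {ratio:.2f}")
```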

2. Ongoing monitoring and reporting of potential risks
A company must have a systematic and continuous risk management approach to AI. This includes a system to analyze AI outcomes, responses and other insurance-related inquiries. Risk management should include reporting to the board of directors any material risks and mitigation plans.

3. Accountability for third parties
A company must ensure that any third parties it engages to facilitate the business of insurance are also promoting, monitoring and upholding the principles.

4. Compliance certification to regulators
A company should annually certify to the applicable regulators the existence of proactive identification systems, mitigation, monitoring and reporting of risks, as well as compliance with legal requirements.

5. Transparency on significant events to regulators and opportunity to remediate
A company should have in place systems to record data supporting AI final outcomes and should be able to produce data to ensure a level of traceability. Any unintended consequence should be remediated when identified.
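To make the traceability expectation concrete, here is a minimal sketch, assuming a simple append-only audit file and entirely hypothetical names such as `log_ai_decision`; the model framework does not prescribe any particular record-keeping mechanism.

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"  # hypothetical append-only log

def log_ai_decision(model_id: str, model_version: str,
                    inputs: dict, output: dict) -> str:
    """Record the data supporting an AI final outcome so it can be
    traced, and produced to a regulator, after the fact."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,   # the features the model actually saw
        "output": output,   # the final outcome, e.g. a quoted premium
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

# Hypothetical usage: log a pricing decision at the moment it is made.
record_id = log_ai_decision(
    model_id="auto_pricing",
    model_version="2.3.1",
    inputs={"vehicle_year": 2019, "annual_mileage": 12000},
    output={"quoted_premium": 1240.00},
)
```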

Actionable areas for insurance regulation. Jillian Froment

As with the Insurance Data Security Model Law, an AI model law can give the insurance regulator the power to examine and investigate insurers while providing confidentiality protections for insurers’ proprietary algorithms.

While regulating and managing the risks of AI can at times feel overwhelming and unknown, these are not completely uncharted waters. By adopting this model framework for AI, regulators and insurers could embrace a comprehensive approach that lets consumers benefit from AI innovation while establishing important consumer protections and trust.
