Responsible AI governance foundational for scaling Gen AI in insurance

Insurance carriers are targeting new approaches for scaling generative artificial intelligence (Gen AI). While most carriers are still in the proof-of-concept stage, an increasing number are moving their Gen AI initiatives into production. In 2024, Gen AI IT budgets doubled year over year across all insurance tiers, illustrating the commitment to this technology as a strategic objective.

While Gen AI provides powerful new capabilities for insurers, it also introduces significant risks such as data loss, hallucinations, lack of transparency, copyright issues, and potential misuse. Scaling Gen AI requires more than technology investments and careful planning. It requires establishing a strong foundation on which an insurer's strategic development and deployment of Gen AI can thrive while mitigating risks and ensuring regulatory compliance.

Responsible AI governance should be the foundational construct of any carrier's AI strategy and needs to be considered at every stage of the Gen AI journey: the initial definition of the Gen AI vision, the selection of the required technology platform and stack, and the establishment of an end-to-end process for Gen AI development and rollout.

Core principles of responsible AI

The core principles of responsible use of artificial intelligence are recognized by industry organizations and governments globally, even as the AI regulatory landscape evolves. Insurers must implement these principles by establishing a foundational AI governance framework that can help mitigate risk, prepare for stricter AI regulations, and ensure that customers and employees trust the insurer's responsible use of AI.

These principles include:

  • Accountability and oversight: Clear lines of responsibility and governance must be established for the development and impact of AI systems.
  • Data privacy and security: Privacy and security measures must be established and maintained to protect the data of individuals.
  • Explainability and transparency: The logic of AI systems, and how they make decisions, should be understandable and explainable to those impacted by the AI's outputs.
  • Fairness: AI systems must be fair and ethical, without introducing bias or discrimination in their decision making or output. This relates to protected categories, including disability, age, race, color, religion, sex, or national origin. 

Insurers need to refer to industry guidelines for help in developing a responsible AI governance framework for their organization. The National Association of Insurance Commissioners (NAIC) Model Bulletin on the Use of Artificial Intelligence Systems By Insurers defines regulatory expectations for use of AI in insurance. The National Institute of Standards and Technology, part of the U.S. Department of Commerce, provides the NIST AI Risk Management Framework.

The role of an AI center of excellence 

The development of an AI center of excellence (CoE) or an AI competency center is a crucial component in scaling Gen AI and can help ensure responsible AI by functioning as a central hub of expertise, innovation, and governance for all Gen AI activities. 

The AI CoE can, for example:

  • Establish the initial responsible AI governance framework.
  • Establish a unified Gen AI vision and strategy for the organization.
  • Help define the target state of Gen AI architecture, development and tooling.
  • Identify the AI skills the organization needs, supporting talent acquisition.
  • Foster collaboration and knowledge sharing to support AI initiatives.
  • Prioritize and support AI initiatives, guiding Gen AI projects through the entire lifecycle, from ideation to deployment. 

The AI CoE can also support the change management considerations that affect an insurer's use of responsible AI. In addition to the ethical (and potentially legal) concerns identified above, these extend to practical matters that impact the entire organization, including staff at all levels and customers. Gen AI adoption doesn't just require technological change; it also calls for managing talent and skills gaps, training needs, and employees' concerns over job security.

Gen AI architecture and stack

With this structural support and focus in place, a carrier can then move ahead and define the Gen AI architecture and technology stack they will need to responsibly scale Gen AI. The Gen AI stack can be thought of in terms of two major categories of tools and capabilities. The first focuses on the development or fine-tuning of the foundation models or large language models (LLMs) that provide the foundational Gen AI capabilities. The second category focuses on creating Gen AI applications that utilize the foundation models to provide use-case-specific end-user solutions such as call center chatbots, coding assistants, or underwriting copilots.

How do you plan to implement the foundation model(s) or LLMs that provide the Gen AI capabilities you need? Answering this question will help scope the Gen AI technology strategy and define which components of the Gen AI stack are required.

The majority of carriers are implementing commercial foundation models (such as those behind ChatGPT) along with retrieval-augmented generation (RAG) to securely blend their proprietary data with LLM capabilities. Others want more control over the LLM's behavior and are either fine-tuning open-source models themselves or, in a small minority of cases, looking to create their own models.
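As an illustration of the RAG pattern described above, the sketch below retrieves the most relevant proprietary documents for a question and folds them into the prompt that would be sent to a hosted foundation model. The keyword-overlap scorer, sample policy snippets, and function names are simplified stand-ins, not a production design; a real system would use an embedding model and a vector database.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant proprietary documents, then ground the model's answer by placing
# them in the prompt.

def score(query: str, doc: str) -> float:
    """Toy relevance score: word overlap (a vector DB would use embeddings)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Blend proprietary context with the user question before calling the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical proprietary documents the carrier governs and curates.
policies = [
    "Homeowner policy HO-3 covers wind damage up to the dwelling limit.",
    "Auto policy deductibles range from $250 to $1,000.",
    "Flood damage is excluded from standard homeowner policies.",
]

prompt = build_prompt("Does the homeowner policy cover flood damage?", policies)
print(prompt)  # this grounded prompt is what gets sent to the foundation model
```

Because the model only sees the retrieved snippets, governance of those source documents (accuracy, access controls, traceability) directly determines the quality and safety of the answers.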

The model implementation approach selected will impact the Gen AI technology stack decision. For example, Gen AI application development platforms such as Amazon Bedrock or Microsoft AI Studio can support RAG implementations. However, if a carrier wants to fine-tune or create their own LLMs, they will also need large language model operations (LLMOps) platforms and tools.

In either case, carriers must consider the responsible AI implications. RAG requires high-quality data sources and strong governance of the proprietary data shared with the model to ensure accuracy, traceability, and protection of confidential information.

For self-developed or fine-tuned LLMs, additional tools may be required for responsible operation of the model itself, including model monitoring, bias detection, fairness, and model explainability tools.
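As a minimal illustration of what one such bias detection check could look like, the sketch below computes a demographic parity gap — the difference in approval rates across groups — over a batch of model decisions. The group labels, sample data, and the 0.10 alert threshold are illustrative assumptions, not regulatory standards.

```python
# Minimal sketch of a fairness check for a deployed model: demographic parity
# compares the rate of favorable outcomes (approvals) across groups.

def approval_rate(decisions: list[int], groups: list[str], group: str) -> float:
    """Share of favorable decisions (1 = approved) within one group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Largest difference in approval rates between any two groups."""
    rates = {g: approval_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Illustrative batch of model decisions: 1 = application approved, 0 = declined.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # → approval-rate gap: 0.50
if gap > 0.10:  # the threshold is a policy choice, not a standard
    print("flag model for bias review")
```

In practice, a monitoring pipeline would run checks like this continuously on production decisions and route breaches to the governance process, alongside complementary metrics such as equalized odds.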

Another consideration for carriers is the purchase of third-party Gen AI solutions. They may choose to do this to provide differentiated solutions that they are unable to develop themselves or as a means to accelerate Gen AI adoption in their organizations. 

Maintaining a focus on responsible AI governance means verifying each vendor's AI risk governance processes during evaluation. Because some states are introducing legislation that makes the carrier responsible for data leakage, bias, and fairness issues introduced by third-party data and systems, any system that handles customer data or interacts with customers should be examined carefully. In some circumstances, an insurer may need additional contractual protections against regulatory and compliance exposure created by a third-party vendor or partner.

Data readiness and the modern data foundation

Among the insurers currently leading the charge in Gen AI adoption, many were already well on their way toward modernizing their data infrastructures, with modern cloud data platforms in place. Modern cloud data platform architecture centralizes and supports the end-to-end data needs of the carrier, offering:

  • Modular design supported by modern API architectures.
  • Expandability for scalable storage and compute.
  • Data acquisition and integration functionality.
  • Centralized cloud storage for all structured and unstructured data.
  • Cloud data warehouses and data lakehouses for trusted data analytics.
  • Integrated data governance.

The cloud data platform provides the backbone of a modern, data-driven organization and is essential for provisioning and managing the data required to develop AI solutions. It also provides the infrastructure to host and seamlessly integrate the Gen AI stack. As a key 'data readiness' ingredient for scaling Gen AI effectively and responsibly, it must be prioritized by organizations that have yet to modernize their data environment.

Scaling Gen AI effectively requires a multifaceted approach that involves modernizing existing data architectures, selecting the appropriate Gen AI stack, utilizing best practices such as an AI CoE, and a strong emphasis on responsible AI governance throughout.
