Digital transformation is taking hold in the insurance industry, opening new avenues for profitable growth, streamlining operations and customer service, and reshaping companies' business models. Innovative technologies, including AI systems and large language models, are central to this shift.
As insurance companies develop strategies to execute digital transformation within their organizations, it is essential to keep several regulatory considerations in mind, spanning data and cybersecurity, technology and resilience, and fairness and inclusion, to stay ahead of evolving supervisory and regulatory trends.
Data and cybersecurity
As insurance companies invest in technology and software solutions to transform and improve their cybersecurity, understanding regulatory expectations should be a priority. Legislators and regulators at the federal and state levels have introduced actions to frame appropriate guardrails around data and cybersecurity risks. The White House issued its 2023 National Cybersecurity Strategy, a plan consisting of more than 65 federal initiatives to fight cybercrime. Furthermore, the SEC adopted rules requiring public company registrants to disclose material cybersecurity incidents and to report annually on their cybersecurity risk management, strategy, and governance. Since June of last year, 22 states have adopted the National Association of Insurance Commissioners (NAIC) Insurance Data Security Model Law, which requires insurers to develop an information security program and to investigate and notify the state insurance commissioner of any cybersecurity events. Lastly, on the international front, insurance firms operating or based in the U.S. that collect data from EU citizens must comply with the EU's General Data Protection Regulation (GDPR).
At mid-year, KPMG identified regulatory challenges related to data and cybersecurity, including a supervisory focus on accountability and potential limitations for data stewards (e.g., collection, protection, storage, retention, and use); attention to model inputs and outputs, including automated systems (data sets, opacity, design, and results); requirements to safeguard and dispose of consumer/customer data; and access authorization and controls.
Technology and resilience
Innovative technologies and large language models, such as ChatGPT and other AI systems, are heightening public policy and regulatory scrutiny around the life cycle of AI systems. As technology evolves, new rules are being put forth to monitor potential risks.
The White House Office of Science and Technology Policy released the "Blueprint for an AI Bill of Rights," a set of principles and practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public. In addition, the NAIC formed the Innovation, Cybersecurity, and Technology (H) Committee to provide a forum for state insurance regulators to discuss innovation and technology developments and how these will affect consumer protection. With increasing regulatory activity and scrutiny directed toward technology and resiliency, companies developing new systems should consider tech and system assurances, interactions with cloud and legacy systems, third-party risks (including ransomware and resilience), and "trustworthiness" (safety, efficacy, fairness, privacy, explainability, and accountability) to ensure the systems meet their intended purpose and application and to understand their impact on customers.
Fairness and inclusion
Companies should prioritize fairness and inclusion when undertaking digital transformation. From an operations perspective, AI can streamline the recruitment process and support more diverse hiring decisions, but it carries inherent risks. According to two University of Cambridge Centre for Gender Studies professors who published a paper on race, gender, and AI-powered recruitment tools, AI systems have the potential to perpetuate biases and promote uniformity in hiring by favoring white or male candidates.
To protect against bias in automated hiring processes, New York City adopted a law requiring employers that use AI to conduct annual audits to check for built-in bias. California, New Jersey, Maryland, and Illinois are currently considering laws that may limit the use of AI tools in hiring. With continuing focus on AI in recruiting, insurance companies should be mindful of the public policy discussions and regulatory pressure that may arise around the transparency of organizational commitments to diversity, equity, and inclusion, as well as diverse supplier outreach, access, and inclusivity.
From a customer-focused perspective, concerns around digital tools used for insurance underwriting have been rising. For example, AI is being used in life insurance underwriting to improve efficiency, but life insurance regulators have raised concerns about the risks of using external consumer data and information sources (ECDIS), such as credit scores, education, and occupation, to establish lifestyle indicators that help determine how life insurance policies are issued. The risk is that these sources could lead to unfair discriminatory practices based on customers' protected characteristics, including race, ethnicity, gender, and disability, among others.
In April, four federal agencies, including the Federal Trade Commission (FTC), released a joint statement on enforcement efforts against discrimination and bias in automated systems, citing that AI tools have the "potential to perpetuate unlawful bias, automate unlawful discrimination, and produce harmful outcomes." Similarly, there have been efforts at the state level to protect consumers from unfair discriminatory practices. In 2021, Colorado enacted a law restricting insurers' use of ECDIS, algorithms, and predictive models that unfairly discriminate against consumers based on protected characteristics.
Overall, digital transformation can provide immense opportunities for the insurance industry to modernize its approach to serving customers. However, as companies incorporate new technology within their organizations, understanding regulatory trends across state and federal jurisdictions is critical for growth and for achieving the promise of digital transformation.