Data is the lifeblood of the insurance industry, plain and simple. Insurers everywhere have embraced the digital age and are looking to new technologies such as drones, the internet of things, AI and blockchain to obtain increasingly valuable data sets. Not only does this data help insurance carriers make better operational, risk and pricing decisions, but it also enables them to come to market with innovative business models.
For all the progress insurers have made in harnessing new data sources and improving their analytics capabilities, they are falling behind in one critical area: verifying the data itself. The threat of unverified, inaccurate or manipulated data has never been higher, and it is creating a new vulnerability that could put insurers’ enterprises at risk.
Consider this statistic: 80 percent of insurance executives report their organizations are increasingly using data to drive critical and automated decision-making at scale. The risk is amplified accordingly: the more decisions that run on data, the more damage unverified data can do.
What’s more, artificial intelligence could heighten these data risks further as more insurers push toward autonomous decision-making. If an insurer isn’t feeding these systems verified, trustworthy data, the AI may make improper decisions that affect customers and their wallets.
Insurers need to address these data veracity issues, or they risk finding themselves in the headlines as the latest company to misuse or falsify customer data. That could also mean regulatory fines and a diminished standing in the court of public opinion.
But insurers don’t need to accept the risk of poor data veracity. They can address this new vulnerability head-on through the following steps:
- Build a data intelligence practice to ensure data veracity: This group will determine the embedded risk across a portfolio of data supply chains and set standards for how much risk is acceptable, based on business priorities and the implications of automated decisions. It should report to the chief data officer and work closely with the CIO.
- Establish data grading capabilities: One of the first steps this practice should take is to ensure that the right data is being used throughout decision support systems and processes. This means understanding the behavior around data creation, such as the data trail created by a person driving a telematics-equipped vehicle or the sensor network for an industrial system. Understanding the baseline for data origination is crucial to responsibly record, use and maintain data, and to detect data tampering that can lead to poor decisions. A minimal grading sketch follows this list.
- Choose ecosystem partners carefully: The issue of data veracity takes on even more importance as insurers explore partnerships to better engage customers. Insurers may no longer own the data themselves; they may have to plug into another partner’s system to access it. Even if they own it, the nature of the ecosystem will likely entail sharing data among the partners, all of whom have an obligation to use and secure the information properly.
- Uncover processes that inadvertently incentivize deceit: The threat includes malicious actors who use automated technologies to manipulate data for their own gain. 34 percent of insurers report they have been the target of practices such as bot fraud, spoofed sensor data or falsified location data, while another 32 percent believe they most likely experienced such an attack but could not verify it. Moving forward, insurers will need to provide cybersecurity and risk management systems with a baseline of expected behavior around data; a minimal plausibility check along these lines also follows this list. Doing so can help insurers be confident in the insights they generate from their data.
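To make data grading concrete, here is a minimal sketch of the kind of check such a practice might run against a baseline of data origination. Everything in it is an assumption for illustration: the record shape, the trusted-source tags, the freshness window and the A/B/C grades are hypothetical, not any insurer’s actual pipeline.

```python
# A minimal data-grading sketch. Assumes hypothetical telematics records shaped
# like {"vin": str, "ts": aware datetime, "speed_kph": float, "source": str}.
from datetime import datetime, timedelta, timezone

# Assumed origination baseline: provenance tags, required fields, freshness.
TRUSTED_SOURCES = {"obd_dongle", "oem_feed"}        # hypothetical source tags
REQUIRED_FIELDS = {"vin", "ts", "speed_kph", "source"}
MAX_REPORT_LAG = timedelta(minutes=15)              # assumed freshness window

def grade_record(record: dict, now: datetime | None = None) -> str:
    """Grade one record A/B/C against the assumed origination baseline."""
    now = now or datetime.now(timezone.utc)
    if not REQUIRED_FIELDS <= record.keys():
        return "C"  # incomplete provenance: unusable for automated decisions
    issues = 0
    if record["source"] not in TRUSTED_SOURCES:
        issues += 1  # unknown origin
    if now - record["ts"] > MAX_REPORT_LAG:
        issues += 1  # stale report
    if not 0 <= record["speed_kph"] <= 250:
        issues += 1  # physically implausible reading
    return "A" if issues == 0 else "B" if issues == 1 else "C"
```

Records graded below A could then be routed to human review rather than fed into automated pricing, keeping low-trust data out of autonomous decisions.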
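Similarly, a baseline of expected behavior can power a simple spoofed-location check: consecutive GPS fixes should imply a physically plausible speed. The sketch below assumes fixes arrive as (timestamp, lat, lon) tuples; the 200 km/h ceiling is an illustrative threshold, not a recommendation.

```python
# A minimal spoofed-location sketch using the haversine great-circle distance.
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KPH = 200.0  # assumed ceiling for a passenger vehicle

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

def flag_implausible_jumps(fixes: list[tuple[datetime, float, float]]) -> list[int]:
    """Return indices of fixes whose implied speed from the previous fix
    exceeds the ceiling: a likely spoof, replay or sensor fault."""
    flagged = []
    for i in range(1, len(fixes)):
        (t0, lat0, lon0), (t1, lat1, lon1) = fixes[i - 1], fixes[i]
        hours = (t1 - t0).total_seconds() / 3600.0
        if hours <= 0:
            flagged.append(i)  # out-of-order or duplicated timestamp
            continue
        if haversine_km(lat0, lon0, lat1, lon1) / hours > MAX_PLAUSIBLE_KPH:
            flagged.append(i)
    return flagged
```

Flagged fixes would not be discarded automatically; they become candidates for the fraud and cybersecurity teams to investigate.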
Insurers have real business opportunities thanks to the influx of newly generated data, and customers benefit too: they are incentivized to reduce their risk exposures and avoid incurring losses in the first place. But for all the opportunities data presents, there are real risks if insurers aren’t properly validating that data. They would be wise to act sooner rather than later.