Swiss Re's Robert Burr explains how to build digital trust

An employee inside the Voyager building at Nvidia headquarters in Santa Clara, California, on June 5, 2023.
Photographer: Marlena Sloss/Bloomberg

The more digitally advanced a country is, the less its people trust AI, according to a study from Swiss Re. The report focuses on digital trust and the various factors that influence how people view technology.

Digital Insurance conducted an email interview with Robert Burr, CEO of iptiQ, Swiss Re's standalone B2B2C insurance division, about what digital trust is and how insurers can build it with their policyholders.

Could you explain what digital trust is?

Robert Burr
Given the high volume of online interactions these days, digital trust has become essential in many industries, including insurance. It filters down through every layer of relations with partners and customers, from B2B to B2C.

Three factors help define what digital trust stands for:

Reliability – to build strong digital relationships, it's paramount to have consistent and reliable data. Related to this are cultural and generational factors, such as age and other societal influences, that shape a customer's willingness to trust a specific platform.

Security – for users to trust a platform with their personal data, they must be sure that their data is safe and only used for purposes to which they have given consent. 

Reassurance – to make users feel comfortable with innovative technology, like AI, being leveraged to optimize insurance products, it's key to be empathetic and augment these new tools with human interaction at the appropriate time and place.

How can insurers build digital trust with customers? What can hinder that trust?

Digital trust is very personal and highly emotional, and it does not always respond to logical reasoning. It can therefore be hard to explain rationally why customers do or do not trust a digital insurance solution, but there are ways to address this challenge systematically. One of the most effective is to promote transparency in data use, for instance by outlining specific cybersecurity measures or by sharing data-use and ethics policies.

Some companies choose to offer incentives for those willing to share their personal data, but this must be supported by strong and transparent policies on how that data is used. 

For some customers, this will help foster trust, but not for all. Surveys suggest that many individuals remain unwilling to establish digital trust with insurers despite such explanations and incentives.

How is this related to data privacy?

This question is tackled by Swiss Re Institute's expertise publication Decoding Digital Trust 2. It breaks down the decision-making process and asks why, in some circumstances, certain customers may be unwilling to share data with their insurer.

Part of the answer, according to the report, lies in the speed of thought individuals use when engaging with digital technology. The notion of being in control has considerable influence on how people judge the risk of sharing data. Individual psychological profiles are a further filter through which trust is molded, as is the customer's cultural and environmental background.

How may generative AI play a role?

Transparency is key to gaining our partners' and customers' trust in AI-driven insurance solutions. This requires significant effort to foster explainability, helping them make informed decisions while protecting their data and privacy.

Our stakeholders want to understand how we use their data, and they demand fair and ethical decision-making that does not compromise human dignity. To make the use of AI as efficient and effective as possible, we need to consider the entire insurance value chain, including human factors.

We firmly believe that the added value of AI will only come from a smart combination of technology and human processes, not just from standalone AI models.