Reckless adoption of AI can help deep fakes grab business data

hand holding smartphone showing man's face on top of screen, being mapped on bottom of screen
Tero Vesalainen / Adobe Stock

Despite attention-grabbing deep fake fraud, cyber insurers are mostly focused on previously existing cyber risks like phishing attacks. Still, technology advances such as AI mean businesses have to think about technological defenses and what coverage they need. In March, Coalition, a cyber threat insurance company, added an Affirmative AI Endorsement to its coverage. The endorsement reimburses policyholders for losses due to fund transfer fraud, cybersecurity failures and related issues. Digital Insurance spoke with Tiago Henriques, vice president of research at Coalition, about how businesses and insurers can use the latest available technology to defend against ever more sophisticated cyber attacks.

This interview has been edited for length and clarity.

What technologies do businesses and insurers have to defend against deep fakes?

Tiago Henriques of Coalition
Tiago Henriques, vice president of research at Coalition.
A couple of different things. For voice deep fakes, we usually recommend that there are shared key phrases between people within the company. If I called you, I need to use the word "banana" in our conversation so that you know that it is me calling you. Alternatively, after you receive a call and you are on that call, interacting with the person, when you hang up, call the person back -- proactively take the action of calling the number that you know is good and confirm with the person – were we just on the phone? 

Those are some of the recommendations we give for deep fakes, voice-related deep fakes. On email, take really strong precautions on things that seem too good to be true. Do second checks. For example, if you get an email that says, "I'm your vendor, here's the invoice, you owe me $10,000," confirm with a second person from accounting that we do owe that vendor $10,000. Don't just transfer the money without a second check. We are seeing a little bit of evolution where defenders themselves are starting to use LLMs [large language models] to fight LLMs, but it's going to be a constant cat and mouse game. It's going to be an interesting evolution to monitor, but it's going to be hard. 

Are these risks or losses covered just as losses from crime, as cybersecurity failures or in some other way?

What we announced is called affirmative coverage, meaning it's not necessarily a new coverage. It's just affirmatively saying that for these cases, we do cover it. We have lots of clients that are embedding LLMs into their products. There used to be something called a SQL injection, where an attacker would put [in] a very specific string to get a database to dump all the content that it has. 

What attackers now use against LLMs is a prompt injection, in which they can put in a very specific string to try to get the bot or the LLM to access some of their customer data. That would be covered. We affirmatively say, if you are a victim of a prompt injection attack, it is covered under your cyber insurance policy. This isn't new. This is a malicious attack, attacking a web application that happens to use an LLM.

What cyber vulnerabilities can AI technology create?

We are seeing customers adopt more and more AI, either for day-to-day use, or embedding it into their products. What we're not seeing is people being very careful. A lot of companies have customer data, and they embed these AI products without thinking about data privacy issues or how providers use data to retrain their own models. 

Make sure you read all of the terms and conditions and data privacy policies of your providers, and how they're going to use all of that data. Do you really need to send all of the data to the LLM? Can you separate customer data from generic data? The EU has tight AI data laws so it's really important to get that right.

How can AI be used to defend against cyber breaches?

Are we seeing AI as something bad, like the bad guys are using? Yes, but we're also seeing a lot of good use come out of it as well, and we're going to go through a rough time before things get better. We see scam calls, deep fakes, voice deep fakes, email phishing, all of those campaigns going up. 

We're going to continue to see that for a little while longer, because the technology is still cheap, so all attackers can just run LLMs locally on their computers. Part of the reason why we're not seeing more cases like the one in Hong Kong with a deep fake video, is because it still requires a lot of compute power to [create a] deep fake video in real time. It's not something you can just run at really high quality. It's not something an attacker can just run locally at home. You're not going to see that grow as much. But voice and text related attacks, we're going to continue to see go up and up and up.

Are businesses putting in operational technology that creates more vulnerabilities, whether from deep fakes or phishing attacks?

People are playing fast and loose with AI technologies. I've seen clients embedding chatbots on their website that have access to their entire customer database. You can literally go talk to the chatbot and ask it to list all of your customers, and the chatbot just drops all of the customers. 

People are not really thinking through all the role based access, and all the privacy configurations that are needed with these AI technologies, both internal and external. There's still a lot that we're learning about, like for example, data poisoning [when a cyberattacker manipulates the data a company is using to train its AI.]

We're starting to see LLMs being able to access and browse the internet. What happens if it browses a malicious website? For example, you ask what is the best candle maker website, and some hacker has a candle making website that has malicious code. The LLM suggests to your client, click here, it's this link, and you end up visiting that malicious link because of that. It's a very brand new technology that is being adopted like wildfire, and people are not really thinking through the security or privacy issues that come with it.

Is there something not as common that businesses and insurers should watch for?

What's putting the pressure on that is people are seeing other tech companies hyped for adopting AI technologies and cutting costs. Businesses are saying the economy is rough right now and they could really use cutting costs, so they decide to throw this magical fairy dust called AI and hopefully cut costs in half, without really thinking about what other problems are going to come with that.