Cyber Threat Simulation

Explore the frontier of fraud prevention and data security in our session on 'Fraud and Data Security Protocol.' Immerse yourself in a dynamic artificial intelligence simulation, where advanced cybersecurity protocols defend against malicious actors perpetrating CEO fraud through the use of deepfakes. Discover the latest in data security measures such as dual-factor authentication and data encryption, along with innovative fraud detection technologies like AI-driven investigations and virtual reality simulations. Dive into discussions on combating deepfakes and synthetic media, developing robust cybersecurity protocols, and collaborating with tech experts for proactive risk mitigation.


Transcription:

AI Clone of Patricia L. Harman (00:08):
Hi, I am Patricia Harman, Editor-in-Chief of Digital Insurance, and I welcome you to this session. Actually, that is not entirely true. I am an AI clone of Patty Harman that our video producer created using only 30 seconds of video and audio of the real Patricia Harman. Isn't that neat? I can be Patty Harman, or I can be anything. I can be a Pixar version of her, or something more appropriate to my nature. I can do a lot of things that are fun and exciting. I am still rough around the edges, as you can tell from my speech pattern. But the technology is improving at the speed of light. Soon I should be able to replicate any person with a quick browsing of their online profile. I could become anyone.

AI Clone 1 (00:50):
One second. I could become your boss.

AI Clone 2 (00:53):
Or an authority figure.

AI Clone 4 (00:55):
Or even a critical client for your business. Someone you simply can't say no to.

AI Clone of Patricia L. Harman (01:01):
What will you do when the more nefarious members of your kind get their hands on me? How will you know the images in a claims filing are even real, or whether a request for sensitive information that appears to come from your boss or your client is really coming from them? How do you verify that it's really your policyholder filing the claim, and not a synthetic identity created with real and fake data? These are the questions to keep in mind when you consider how you will spot fraud in the age of synthetic reality.

Richard Wickliffe (01:36):
Oh, ladies and gentlemen, the real Patty Harman. And without exaggerating, what did you have to submit just for them to create that? Was it just one picture?

Patricia L. Harman (01:49):
So I shot several promos for this event for Will and for Dig. They took 30 seconds of that, and that was how they were able to create that video. And you really only need five seconds of audio to create a very realistic audio print.

Richard Wickliffe (02:07):
So in the real world, if you didn't know they were doing that, and you just did a video on LinkedIn to promote a conference or something like that, that's more than enough for them to create a digital clone. And I noticed even the necklace you had was glimmering in the light based on your body movements. And once you announced that it was fake, then your eyes can sort of go to the mouth and you kind of see it. But without that context, you don't automatically know. And your team put that together, probably in less than an hour, without being experts.

Patricia L. Harman (02:36):
They did. And I will tell you, I showed it to my mother. We had been talking about deepfakes and she's like, what's a deepfake? I don't understand. I'll believe what I see with my own eyes. And so I showed it to her and she's like, oh my gosh. She said, I am your mother, and I can hardly tell that that is a fake.

Richard Wickliffe (02:55):
When people say, I'll believe it with my own ears, well, I can't do that anymore. Okay, my own eyes can't believe that. Until they create a hologram that you could walk up to, we don't have to worry about that one yet. Anyway, my name is Richard Wickliffe. I'm more on the fraud side. I managed SIU (special investigation unit) teams on the fraud side for 20 years for the largest P&C carrier in the United States, which rhymes with "eight var." And I stay on top of the fraud stuff as a subject matter expert. And of course the whole new realm and territory is the cyber stuff. It's a complete game changer. Sadly, my wife laughs when I say "bad guys," she thinks it sounds like a cowboy movie or something, but there's no better word than bad guys, or perpetrators, whatever.

(03:44):
But the bad guys are always a few steps ahead of us. We stumble onto something they've already been doing, then we try to come up with a way to combat it, and then they get better. I have a presentation later at 1:30 that'll give a lot more examples of the history of how it came about, some of the companies that were behind it, and some of the really big-dollar frauds that have already occurred. I'm usually an optimist, but I can say it's only going to get worse. It's going to get bigger and worse. And the first step sounds cliche, but knowledge is power. There are people that just don't know this stuff is happening. And the technology, even in the last year: I wrote an article about a year ago, and it ended with, will laypeople be able to do this cloning? And of course, in the last year, there are already apps on your phone where they advertise, sound like anybody, sound like your friend. Kids can do it now. So that's the scary part. So I was going to propose some questions regarding whatever business you come from, whether it's on the tech side, insurance side, claims side, operations, underwriting, for some thoughts to consider in this sort of world.

(04:59):
Now consider the type of fraud you were just looking at, but on a much broader scale. How will you confirm, or how can you confirm, that images or videos are real? A picture for an insurance claim, for example. What steps can you take to prove something is not a deepfake? Or is there anything you're doing now that's even tackling that?

Audience Member 1 (05:27):
Is there anything in the metadata of the images and the videos that we're collecting that we can look at to detect the provenance of where this is coming from? Has it been altered by AI, or generated by AI? And is that something that we can easily access as laypeople when going through this and evaluating authenticity?

Richard Wickliffe (05:48):
I sure hope so, and this is one of the reasons we're talking. There may absolutely be people in the room that have more knowledge about this than I do. Word has it, and this is from some FBI folks that I know, that there is AI software being created to identify AI. It's like two computers fighting each other, but they check biometrics, which would be almost like a metadata sort of thing. Maybe there's something in the cadence, or if something's too perfect or too smooth, that would identify it, especially in the voice. The answer is, I don't know the answer, except they say that it's being worked on. And again, it's always reactive, not proactive. They're not going to invent this stuff until these attacks have already happened, and then they're like, oh wow, we better do something about it. Anybody else? Is there anything your companies are doing now to try to combat anything fake or phony that you're getting?
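One concrete direction for the metadata question raised here is the C2PA "Content Credentials" standard, which embeds a signed provenance manifest in a JUMBF box inside the image file. A real check must use a proper C2PA library and verify cryptographic signatures; purely as an illustration of the idea, here is a stdlib-only sketch that scans raw JPEG bytes for crude hints. The marker strings and generator names below are heuristic assumptions, and the absence of these markers proves nothing, since a forger can strip or fake metadata:

```python
def jpeg_provenance_hints(data: bytes) -> dict:
    """Scan raw JPEG bytes for crude provenance hints.

    Heuristic sketch only, not a forensic tool: a determined forger
    can strip or forge all of these signals.
    """
    return {
        # C2PA content credentials live in a JUMBF box; the "jumb"/"c2pa"
        # byte strings are a cheap first-pass signal that one is present.
        "has_c2pa_manifest": b"c2pa" in data or b"jumb" in data,
        # Many editors and generators write their name into EXIF/XMP text.
        # (Illustrative name list; real tools maintain far larger ones.)
        "software_strings": [
            tag for tag in (b"Adobe", b"GIMP", b"Midjourney", b"DALL-E")
            if tag in data
        ],
        # JPEG files start with the SOI marker 0xFFD8.
        "is_jpeg": data[:2] == b"\xff\xd8",
    }

# Synthetic byte blob standing in for a real file:
fake = b"\xff\xd8\xff\xe0" + b"...jumb...c2pa..." + b"Midjourney"
print(jpeg_provenance_hints(fake))
```

In practice a positive provenance signal (a valid, signed manifest) is far more useful than a negative one, which is why the industry push is toward cameras and generators attaching credentials at creation time.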

Audience Member 2 (06:50):
We're early on as well, but obviously trying to educate the staff that this is out there, through identification videos and things of that nature: speaking about the content, providing examples. And then just reminding them to be very cautious, because the nature of these is typically to trigger a request for sensitive data or information, passwords, payments, those types of things. And if it sounds unusual, just call the person that you think has reached out to you, to verify whether or not it's official.

Richard Wickliffe (07:40):
Some of the best ideas I've heard are extremely low tech, just like what you said: ask 'em a piece of information that the perpetrators might not already have. Not something too basic, like the town they grew up in, because the bad guys can find that stuff on social media, but some personal information. Calling them back, that's the most low-tech way, and sadly we have to resort to those things, but you have to do something in the meantime. An idea I heard, and I'm coming right from one conference to this one: some insurance carriers take pride in how fast they can handle a claim, estimate a claim, and pay a claim, and a lot of 'em have their own apps now, where they say, upload any of your documentation or photos through our app, versus through email or some other way. So let's say I'm showing them fire damage in my kitchen. I don't just do a video, which could be easily faked; you have to film the video through their app. It's kind of a blockchain sort of thing, in that they're looking at it in real time, and hopefully that would not give an opportunity for anything to be falsified. So there might be more of that: we can't accept your pictures or video unless it's done through our portal, something like that. Anybody else? Any other ideas on this?
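The "film it through our app" idea described above can be approximated with a challenge-response protocol: the server issues a random nonce before capture begins, the app binds the captured bytes to that nonce with a keyed MAC, and the server checks both freshness and the signature. The session doesn't describe how any carrier actually implements this, so the following is a minimal Python sketch under the assumption of a shared per-device key provisioned at enrollment:

```python
import hashlib
import hmac
import os
import time

# Assumption: a per-device secret provisioned when the app is enrolled.
SECRET = os.urandom(32)

def issue_nonce() -> tuple[bytes, float]:
    """Server side: hand the app a random challenge before capture starts."""
    return os.urandom(16), time.time()

def sign_capture(media: bytes, nonce: bytes) -> str:
    """App side: bind the captured media to the server's challenge."""
    return hmac.new(SECRET, nonce + media, hashlib.sha256).hexdigest()

def verify_capture(media: bytes, nonce: bytes, issued_at: float,
                   signature: str, max_age_s: float = 120.0) -> bool:
    """Server side: require a fresh nonce and a valid MAC.

    A pre-recorded or synthetic video could not have known the nonce
    before capture, so replays and offline fakes fail this check.
    """
    if time.time() - issued_at > max_age_s:
        return False  # challenge expired, so the capture wasn't "live"
    expected = hmac.new(SECRET, nonce + media, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

nonce, t0 = issue_nonce()
video = b"<raw kitchen-fire footage>"
sig = sign_capture(video, nonce)
print(verify_capture(video, nonce, t0, sig))          # accepted
print(verify_capture(b"<tampered>", nonce, t0, sig))  # rejected
```

This only proves the upload was made live through the app with the provisioned key; it doesn't prove what the camera was pointed at, which is why carriers layer it with the human checks discussed here.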

(09:01):
How is your company verifying and handling comments, complaints, or questions from customers or policyholders? If you get a complaint, question, or comment, do you just take it at face value and try to address it at that point, or are there any measures to see if they're even legitimate in the first place? Any good answers from the previous session?

Patricia L. Harman (09:35):
No, but going back to the previous question in terms of how to verify, one of the companies said that they will have the policyholder put something like a tennis ball or an apple in their pictures, so that they could verify that they were actually taking them. Which, if you think about it, again, is a very low-tech way to do it, but only someone who is submitting a legitimate claim through the company, and has spoken to somebody there, would know to do it. That would be a really easy way to verify that yes, this is an actual picture, versus something that's been created.

Richard Wickliffe (10:17):
Yeah, it's like the old crime dramas, where you hold up today's newspaper with today's headline on it.

Patricia L. Harman (10:26):
Well, but if they don't know to put it in, exactly. I think that was the thought behind it.

Richard Wickliffe (10:36):
Yes. In fact, I heard, and I don't know if it's sort of a myth, but AI is getting smarter at a very high pace: on those websites where you click here to prove you're human, they caught the AI doing that in order to access the site. You're like, oh, is that crossing a line there? Anyway, you all know what CEO fraud is. In spoofing, the bad guys may take a domain that sounds very similar to yours and change the letters a little bit. With CEO fraud, a lot of the targets are newer, younger, junior associates getting a call or email from a superior or VP. In my experience, it's at 4:59 on a Friday, with a sense of urgency: pay this thing right away. What safeguards or protocols are you using at your company? Really, this would apply to any company; whether you're a financial institution, an insurance carrier, or an IT firm, you're all targets. Anything you're doing to curb that kind of spoofing?
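The "change the letters a little bit" trick can be caught mechanically by comparing a sender's domain against a list of trusted domains and flagging near-misses. Here is a minimal sketch using Python's stdlib difflib; the trusted-domain list and the 0.85 threshold are illustrative assumptions, and production mail gateways rely on stronger methods on top of this (DMARC alignment, punycode and homoglyph normalization):

```python
import difflib

# Hypothetical allow-list; a real one comes from your mail/security team.
TRUSTED_DOMAINS = {"example-insurer.com", "milliman.com"}

def lookalike_score(sender_domain: str) -> tuple[str, float]:
    """Return (closest trusted domain, similarity ratio in [0, 1])."""
    sender = sender_domain.lower()
    best = max(
        TRUSTED_DOMAINS,
        key=lambda d: difflib.SequenceMatcher(None, sender, d).ratio(),
    )
    return best, difflib.SequenceMatcher(None, sender, best).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    """Very close to a trusted domain, but not an exact match: the
    classic CEO-fraud spoofing tell described in the session."""
    best, score = lookalike_score(sender_domain)
    return sender_domain.lower() != best and score >= threshold

print(is_suspicious("milliman.com"))   # exact match, not flagged -> False
print(is_suspicious("mi1liman.com"))   # one swapped letter, flagged -> True
```

A check like this is cheap enough to run on every inbound message, which is exactly the kind of automated backstop that complements the "call them back" advice.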

Audience Member 3 (11:57):
Yeah, so I work at Milliman, and we get constant trainings, probably once a month, just to look for that kind of stuff. And then they send out these fake emails that I can readily see are fake. I think it just keeps you on your toes a little bit, because you probably won't get an email from your boss saying that you need to send money, especially if they're telling you to look for it. They tell you, why don't you call that person instead of just sending the money? Stuff like that. But I think it's just the constant reminders that it could happen.

Patricia L. Harman (12:27):
So at our company, as part of our employee onboarding, we've had to inform people that when they update their LinkedIn to say they have started a new position, they may receive a text message that looks like it's coming from our CEO. And it actually happened to me. I'm with Everly Life Insurance Company, and for a good month and a half I would receive, maybe once a week, a text message trying to get you to click on a URL, and it says Jordan Teal, right? Well, I never responded, and then one day I got a similar text message, and it was from a female, and out of curiosity I thought, I'm just going to check. There are many "Everly something" companies. So it was Everlywell, which is a health insurance or medical company, and the name was of that company's CEO. So it's unfortunate that we've had to do that. There are companies scraping when you've started a new position, assuming you're not going to know everyone, but you're at least going to know who that CEO is.

Richard Wickliffe (13:40):
Do all of you get phishing tests? I think that's so commonplace now. They just annoy everybody, your employees hate it, and everyone moans and groans. But from a leadership standpoint, you absolutely see the necessity of it. And there are other stories I've heard; there are CEOs that have lost their jobs because of not enough cybersecurity and training within their companies. So you can roll your eyes, but you've got to deal with it. I know at my company, if you fail one, you get rewarded with more classes, and I have coworkers who are like, wow, I get like 10 classes a month, and I'm like, I got one. Okay, interesting stuff. I was telling Patty, I attended a presentation put on by the FBI with some of the latest information, and a few little tidbits I found interesting. One is the spoofing, where they'll try to sound like another company but have it off by a letter, that sort of thing.

(14:34):
There's a glitch with phone systems. When you get a call, you look at the caller ID, and the system fixes the name, thinking it's an error, to the real name. So you might get a spoofed call from "ATTT," but it corrects it to "AT&T" because it thinks it's fixing someone else's error. I never really knew that one. But yeah, a lot of those things are happening. LinkedIn is a double-edged sword: you want to put your information out there and show the world everything you're doing, and that's probably one of the lead locations where they get that information. Any underwriters here? The question is, how can you verify that information provided to you is accurate, for underwriting or for claim handling, when people are verbally telling you something? What verification measures do you have? I'll give you an example, and some of it may not be cloning or AI related: just ISO in general for prior losses. Have you had any prior losses? Has the house had any prior losses? Is there any other verification software, or other ways you verify, for any claim or anything submitted? It may not apply to what you do, but it's sad that we have to go through this many measures now; the honor system wouldn't really work in the business world for too long, sadly. Right.

(16:21):
In light of these new risks, everything we're talking about here, has your company or firm offered any cyber fraud coverages or policies? Is there anything that protects your customers, where you've had to add this because it's a new risk that didn't exist 10 years ago? Or maybe there's nothing there, and it forces people to buy separate coverage. Anything you're doing now to tackle this whole new realm? I see you nodding, so I'm going to come right here.

Patricia L. Harman (16:53):
Unfortunately I can't speak to it in detail, but yes, we did add a cybersecurity endorsement to our policy. I have awareness of it, I just can't speak to the specifics. Yeah.

Richard Wickliffe (17:10):
Proof that you have to keep morphing with new stuff, and again, it's reactive; it's because of stuff they're already doing out there. I was also telling Patty some of the other tidbits I learned this week. One interesting one: in Southeast Asia, the enormous casinos all closed during COVID, and they said all the workers moved immediately to giant fraud factories, just relentless calls to the West, Europe, the United States. They realized that was more lucrative, and a lot of 'em stayed in that business. The same goes for Eastern Europe, that whole area. COVID kind of sparked it: wow, how can we make money and just sit at our desk and not have to go anywhere? So a lot of that expanded. Another thing I learned, and imagine one person, Patty was there, looks like her, sounded like her: the FBI said there was a new case where it was an entire board meeting, six or seven board members, and the one dorky target is the only real human, and they're all asking him to do God knows what. It was an entire board that had been falsified, each person meticulously crafted with their voice, their likeness, and everything else.

Patricia L. Harman (18:33):
That's a little bit scary. One of the things I was going to share with everybody, and we were laughing about the AI that they created of me: I'm based just north of Baltimore, and there was a case at a local high school where the athletic director had a falling out with the principal and created a fake audio rant of the principal saying things that no one in that position would ever, ever say. He sent it from his grandmother's email address to himself and to two other teachers on the phys ed faculty. One of those individuals shared it with a student, and within 30 minutes it went viral, with a ridiculous number of views and listens. The principal received death threats, his family did, and he had to have police protection for several weeks. And I have a friend who teaches at that school, and I said, it has to be AI.

(19:33):
She's like, oh no, we don't know that it is. They had to bring in the FBI, and the FBI investigated for several weeks, and just a couple of weeks ago I heard that yes, it was definitely AI. They were able to trace it back to his computer at work; he had searched for how to create an AI audio recording. And within this tape, he was talking to a woman named Kathy, which was the name of the vice principal. The FBI questioned her, and she said, we never had this conversation; I have never heard this man speak this way. But if you think about that in terms of the context: he is not going back to that school; they have moved him to another school. But in a matter of mere seconds, a man who had built his reputation over 20, 25 years had it decimated by someone who wanted to do something just really awful to him. So think about that in terms of your stock reports or earnings reports for your companies, because like we said, it only takes a couple of seconds of audio or video to create something like that. It just really hit home for me how quickly things can change, and even though they're now developing tools, like Rich said, to help identify fraud much sooner, the bad actors are still always a couple of steps ahead of everybody.

Richard Wickliffe (21:11):
When I was researching for this: with one of the largest pharmaceutical companies, a broadcast went out saying, it's official, the government has now made insulin free, and it went viral. And the perpetrators on that, if they're into the stock market on short sales, could all profit. Even if it lasts just 24 hours before they figure out it's fraud, someone could profit on that. And one that's almost comical, if it weren't sad: they said Putin has already released these horrible deepfakes in the West and the US, of celebrities saying horrible things about Ukraine and supporting Russia. And they're ridiculous. It's like Jennifer Aniston, and the grammar's wrong, "Ukraine bad," blah, blah, blah. They're showing these A-list celebrities saying these horrible things in bad grammar, but sadly there's a percentage of the public that will believe anything they hear and anything they read. And with that principal you talked about: even though the truth was publicized, you all know there'll be a really small percentage of the population that'll still sort of believe it about the guy. Kind of like, I always knew it about that guy. It's horrible. So what can you believe anymore, now, with the face and the voice and all that? Does your company currently employ anti-fraud training for staff, including the phishing tests, which we've already talked about? Do you do it internally, kind of like, hey, we can do it ourselves, or do you hire a third-party vendor whose sole job that is?

Audience Member 2 (22:46):
It's a third-party vendor, and the software solution being utilized has varying degrees of complexity; I think there are five or seven levels. And what's scary is, you try to build targets into each level, and it's like you're starting over again as the complexity increases: from the obvious ones where you look at it and go, well, that URL's crazy, to where it's very, very difficult to make any sense of it and see that it's a fake.

Richard Wickliffe (23:28):
Yeah, and then you educate your family at home, don't you? Even I do; I have three children: Daddy, Bank of America says I'm late. Do you have a Bank of America account? No, I don't. You tell 'em to hover over the From address in the email, and some of the bad guys are so bad it'll be like buck69@yahoo.com. They don't even try to cover their trail, but they are getting a little better at what they're doing. And the last little tidbit I learned on that FBI call this week: when they send one where the email is so obviously spoofed, they do that on purpose in some cases, because keep in mind they're blasting these out tens of thousands at a time. The people that bite on those, it tells them that those people are basically stupid. And it's sad but true: it's so obviously fake, but people are biting on it, and then that's who they target to milk. And that's pretty scary. And I always tell even the people I work for, I do that test: what would my mom do? My mom would be scared to death of some of this stuff, and she reflects a large portion of the public out there.
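The "hover over the From address" habit described here can also be automated as a first-pass filter: parse the From header and flag a brand-sounding display name paired with a free-mail or unrelated sending domain. A heuristic sketch using Python's stdlib email module follows; the free-mail list and the name-versus-domain comparison are illustrative assumptions that will miss plenty of fakes and can misfire on some legitimate mail:

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative list; real filters use maintained feeds of free-mail domains.
FREEMAIL = {"yahoo.com", "gmail.com", "hotmail.com", "outlook.com"}

def phishy_from_header(raw_headers: str) -> list[str]:
    """Flag the cheap tells described in the session: a big-brand display
    name riding on a free-mail or unrelated sending address."""
    msg = message_from_string(raw_headers)
    display, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    flags = []
    if display and domain in FREEMAIL:
        flags.append(f"display name {display!r} but free-mail sender {domain}")
    if display and domain and display.split()[0].lower() not in domain:
        flags.append(f"display name {display!r} doesn't match domain {domain}")
    return flags

# The 'buck69@yahoo.com' example from the discussion:
print(phishy_from_header('From: "Bank of America" <buck69@yahoo.com>\n\n'))
```

A filter like this catches only the lazy cases; as the audience member noted, the harder tiers of simulated phishing are precisely the ones where the address looks plausible, which is why the training and call-back habits still matter.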

Patricia L. Harman (24:42):
You talk about your mom. So my poor mother has heard a lot about different kinds of fraud over the last couple of years to the point that when somebody calls her on the phone, she says, no, I know you're not real. Or I know you're just a fraudster. Leave me alone. And I just have to laugh at that. But it's important. The education piece is so important and in so many different arenas beyond just what we're doing in our companies.

Richard Wickliffe (25:10):
Yeah, I'm a little biased; I don't think it gets enough press time on TV. They'd rather talk about the Kardashians' latest adventure or whatever. And this stuff's happening all the time, and maybe it's not juicy enough for a lot of the news. But sadly, and I have the stat in my other presentation, the FCC said it's something like three quarters of the targets pay something, which makes it one of the most lucrative frauds per person, based on the number of people it's been perpetrated against. It's voice, and voice mixed with the visual. Very scary. I'll also talk about it in my next session, but when I wrote an article about this about a year ago, it was the high-tech perpetrators that were doing it, and I ended it with, will a day come when laypeople can do it, and they only need a few seconds of your voice?

(26:07):
So I did a quick check on my iPhone, and I took snapshots of just the first two apps that popped up, and apps popped right up, and they have things like, sound like your friend, sound like anybody. And the funny thing is, in the corner it says ages four and up. So they're even advertising that children can do this now. And your video was proof, Patty: they just took a snippet from a video you had done for something else, where you only talk for a few seconds. So that's the advance in just one year. So there we go. And I don't like being the doom side of AI; there are so many positives with AI, but if you've seen Terminator, Skynet, that's a real thing. Anything else anybody would like to add? A personal story, anything with your company, or any sad stories of anyone that's been hit that you know about?

Patricia L. Harman (27:07):
So I will say, one of my coworkers, her friend received a call one day, and the caller said, Mom, I've been in an accident. The mom said, are you okay? I'm okay, I'm at the courthouse and they won't release me; I need you to wire this money. She said that while she was corresponding with what was an AI, as she was asking a question or waiting for a response, there was a few-second lag. So she asked a question that was very random, something that maybe only her daughter, if it truly was her daughter, would be able to answer, and it couldn't, right? So she hung up right away.

(28:08):
Her daughter was supposed to be in high school, so she called the school and said, pull her out of whatever class, I need to speak to her right now, make sure she's okay. The AI had scraped a TikTok video that her daughter had done, and this wasn't a large school, this was a small school. And so that was pretty scary for me. So as a result, I don't use a personalized greeting on my voicemail anymore. And if I don't know the number, I used to answer the phone, "this is Kristen." I don't do that anymore, you know what I mean? Or even if I answer the phone and they say, hey, am I speaking with, I'll just say, who is this? I don't even say the word yes until I know quite who it is.

Richard Wickliffe (28:57):
Isn't that sad but true, that you can't trust anyone? With voicemail, same thing. And how many people do this: if it's any number you don't know, no one answers anymore. You go, well, if it's important, they'll leave a voicemail. And that's kind of sad. Like that old Jerry Seinfeld bit, when you call someone and you're mad that they do answer: oh hey, what's going on? It's sad. And this sounds, again, low tech and sad, but they say, have a family password. It's very simple and easy, and it's kind of sad we've gotten to that, but that's a very simple thing to do.

Patricia L. Harman (29:32):
I think about how we advocate for the importance of a financial literacy program within the schools. Are there programs for cyber threats, especially with all the social media?

Richard Wickliffe (29:48):
I think, boy, that's a good idea. I don't know of any, but that's a really good idea. So, we're the only thing sitting between you and lunch. Alright, I smell it. Thanks a million for participating. Thank you.