That's Not Your Boss on the Phone

Explore the evolving landscape of AI voice cloning fraud. We'll navigate the rise of AI-driven deepfake technologies, from visual manipulations to eerily precise voice reproductions. We'll explain the noble origins of voice AI and its potential nefarious applications, and examine real-world incidents, including a $243,000 corporate heist orchestrated through a convincing AI-generated voice. Discover how cybercriminals exploit online content for voice samples, and how that notable 2019 heist predated public awareness of technologies like RealTalk. We'll address the escalating prevalence of voice-deepfake fraud ('vishing'), shedding light on business exposure and financial losses, and conclude with actionable security measures. Arm yourself against the growing threat of AI voice cloning fraud with vigilance, education, and verification techniques.


Transcription:

Richard Wickliffe (00:07):
Hello. Thank you for sticking around. I know it's Friday at the end of the day and I appreciate it. Like she said, I've worked for 25 years in the SIU, the Special Investigation Units, of one of the largest P&C carriers in the United States. For 20 of those years I was leading teams, and they were in South Florida; I was housed in Miami, which you may or may not know has had a small fraud problem, but we're working on it. And now this whole new territory is the cyber and the AI stuff. And like I was telling the other group, we've had to be reactive instead of proactive. We're fighting it based on schemes we're uncovering, and the bad guys are always one step ahead and we're just kind of chasing them. But that's the best we can do right now. So welcome to That's Not Your Boss on the Phone.

(01:03):
Definition of deepfakes. The first ones you ever see are usually celebrities. It's on social media. There's one with Morgan Freeman talking about something or Tom Cruise doing something, and it's visual deepfakes, and they were mostly used for comical purposes. Then it slowly morphed into voice deepfakes, where they start cloning the voices. That's relatively new, and it's been used in some pretty large financial heists, major frauds, as well as the entertainment side of things. And now you're going to have the marriage of visual and voice together, which is really going to be a game changer. So who are you really talking to? You never really know who's on the other end of the phone. Definition of phishing. We all know phishing with the 'ph,' which was so two years ago, where you got emails and the email says your account's been hacked, or click here to sign into your Amazon or PayPal.

(02:03):
And now even the emails are old school. Now it's voice. You're getting phone calls. You may get voice calls through your computer system or on your phone, and it might be people that you know really well, it could even be a family member, and it's actually their voice, or at least it sounds like their voice. Vishing is voice phishing. There's been an increase since the pandemic, with more and more people working from home. These are computers that aren't necessarily protected the same way your company systems are, and they're relying on that fact, as well as on you not meeting people in person; it was only through the screen. And now that Covid is over, we've decided to keep a lot of things: a lot more video meetings, video conferencing, even in the legal world with mediations and arbitrations and depos. Everyone's figured out it's so much easier to not have to drive and park somewhere.

(02:59):
Let's just do it on the computer, so it's not going to go away. Now you have financial losses, an increase in voice fraud with businesses. So this guy must have clicked something that he really feels like he shouldn't have done. The first major voice deepfake, or at least the first publicized one, because I'm sure some have happened before, was in 2019. A CEO was conned. He transferred $243,000 because he got a call from the CEO of his company's parent company in Germany. He said he absolutely recognized his boss's voice. He said it had a vocal melody to it, and he was told to transfer $243,000 immediately. As soon as the money was gone, it was sent to international accounts around the world. That's untraceable. It was all fake. It wasn't his boss.

(04:02):
The increase in voice deepfakes comes with the rise of virtual workers. According to the FBI, whose last report was 2022, frauds began to exploit virtual meetings, and voice fraud increased over 10% in financial institutions compared to before the pandemic, and it's only getting higher. An example, here's how they do it. First they compromise your CEO's or an executive's email. How easy is that to get? Websites, public documents that can be found online, LinkedIn, stuff like that. Then using that email, they set up a fake virtual meeting with you, and they may even put up a still image saying they're having technical difficulties, but it is your boss's voice, or you think it's their voice. In one case, they were advanced enough that the bad guy said, I might not sound right because I'm having some audio technical issues. And they said that to kind of cover up any glitches.

(05:08):
So that first one I talked about that was publicized was $243,000. That was 2019. In 2020 it increased to $35 million in the United Arab Emirates using voice cloning. The exact same MO: a manager was fooled by voice. They claimed they needed the money immediately for a business acquisition. And notice 'immediately.' They love making things where time is of the essence. You must hurry. I'm picturing 4:50 PM on a Friday. You've got to get this paid right away. And as soon as the money was gone, it was dispersed to multiple accounts around the world, untraceable. So now 2023, and this is more on a human level versus a business insurance level. There's a huge surge in voice cloning to mimic loved ones that are in danger. This is your child, your parent, your spouse, and it's getting very scary.

(06:05):
"Mom, these bad men have me" are the exact words that Jennifer DeStefano heard from her 15-year-old daughter. Her daughter was in ski school, snow skiing school, and she got a phone call from her daughter. She took it immediately because she's always worried about a ski accident. The daughter screamed, "Mom, these bad men have me," and said, "I've really messed up." At that point, a man got on the phone and said, "We have your daughter." And his exact words were, if you don't pay a million dollars, we're going to pump her with so much drugs and then dump her in Mexico. The mother obviously freaked out. She said, "I don't have a million dollars." The mom said in an article the voice sounded just like Brie's, the inflection, everything. They demanded $1 million ransom or else. And then lo and behold, the daughter called the dad and they all texted each other and the daughter's like, what are you talking about?

(07:02):
Here are the actual texts: "Dad, someone has kidnapped Brie." "Brie picked up and she's fine. She's calling you." "She's fine, for now at least. I don't know what that ransom thing was all about." So that happened. Very scary, and that's a huge uptick. And in the last course we were in, someone in the room had a situation where that's happened to their friend. My dad, 80-something, received a call from his granddaughter while she was at college saying she was in big trouble, and, probably because he's my dad, he was smart enough to ask, what do I call you? What's my pet name for you? And they hung up right away. So it's sad that in this day and age we have to have these safeguards, but one of the recommendations they have is to have a family password.

(07:52):
So what's the risk and what can we do? According to the FTC, over three quarters of the victims lose money, which makes it the most lucrative imposter scam on a per-person basis. People share their voice in some form at least once a week on social media, according to McAfee. That could be a business report, a video where you've been interviewed, or just social media, where younger people are constantly uploading content, whether it's Instagram, TikTok, Facebook. And I hear different stories about how much of a snippet they need to fake the voice. Any of you that attended Patty Harmon's breakout class saw where they did an AI of her talking. They took a snippet just from a video she made advertising this conference, just a 30-second clip of her voice, and created a whole fake version of her. Use family passwords. And a big one is don't broadcast your travel plans out to the world. We have friends right now whose single daughters are traveling around the world and it's posted on social media. The perpetrators could easily use that to fool the parents, knowing the daughters are thousands of miles away. You don't need to broadcast all your travel plans on social media; just upload all the nice pictures once you get home.

(09:21):
The FCC has made it illegal for robocalls to use AI-generated voices. Of course, you chuckle, like, I don't know how they're going to police that or govern that, but at least it's a sign that they're thinking about it, and there are going to have to be new laws and regulations as this grows.

(09:38):
Any Joe Rogan fans in here? Or at least you know who he is. If you don't know who he is, he's a comedian who has a very popular podcast with over 2,000 episodes. Well, a company in 2019 named Dessa, out of Canada, released a clip of Joe Rogan talking for two minutes about absolute nonsense, including a hockey team of chimpanzees. Well, it wasn't really Joe Rogan's voice. In fact, Dessa wasn't trying to fool anyone or perpetrate any fraud. They even said, hey, we made a fake audio of Joe Rogan. They did it to advertise their services. The clip is still on YouTube. It's almost a perfect voice simulation. And how did they get his voice? It absorbed hours and hours; with 2,000 episodes, it had so much material for the computer to absorb that it even had the breathing and the ums and ahs, that sort of thing. And it's text to speech, meaning with this program you only have to type in what the person's going to say. You don't have to sit there at a microphone and worry about getting the cadence right, or an accent, or anything like that. You just type the dialogue and the AI will say it. Now keep in mind, this is five years ago. Here's the example,

Video Presentation (11:02):
Friends, I've got something new to tell all of you. I've decided to sponsor a hockey team made up entirely of chimps. I'm tired of people telling me that chimps are not capable of kicking human ass in sports. Chimps are just superior athletes and these chimps have been working out hard.

Richard Wickliffe (11:19):
That gives you the example. That was five years ago, so the technology is light years beyond that now. Now what's happening? I just randomly found this in about two seconds on Instagram. I don't know why they keep picking on Joe Rogan. In this one they're using his voice as a sponsor of a product that he's not getting paid to sponsor. And notice also, they add the element of time being of the essence.

Video Presentation (11:42):
And put food on your table. So you need to go and claim your $6,400 subsidy that they owe you. In 24 hours it'll be over. They're not giving it out after tomorrow. So you need to claim it right now while it's still there for you. Do it now.

Richard Wickliffe (11:56):
Do it now, hurry. Now, I hear there are other celebrities whose voices are also selling products. I don't know if anyone has the resources to police this stuff. It's probably happening from offshore. I've never heard of the celebrities trying to sue over it, because I guess it's almost impossible to try to find the perpetrators. Now, this is one, the 2023 cyber heist. I like to call it the $15 million heist without using a gun, just because it sounds more exciting that way. Now, was this George Clooney and his buddies going in and robbing one of the casinos? Nope. It was a bunch of 19- and 20-year-old kids that were probably in their parents' basement. They gave themselves a name, Scattered Spider, and it's like that line from True Lies. Why are they called that? Probably because it sounds scary.

(12:49):
They called a service desk. It was just a voice-to-voice call. It wasn't a hyper-intelligent hacking job. Someone called the help desk, and I'm sure someone in this room probably has much more technical information about it than I do, but a source I spoke to said the best analogy he could give was calling your company's help desk to reset your password, which we've all done. We've all called. What happens? Call the help desk. Well, someone did call the help desk, they got security settings reset, and then they got into the system. Now, if you know anything about the Vegas strip, MGM has about half the casinos. Caesars Entertainment has the other half. Well, this was just this past September. They attacked MGM. They like to say MGM went dark. What that means was most of their gaming machines turned off, the elevators failed, people's room keys failed. The computers with people's payment methods failed. It pretty much crippled the whole operation. What a lot of people didn't know is that Caesars, their main competitor, was attacked four days earlier by, according to sources, the same culprits.

(14:02):
They got the multi-factor authentication reset for privileged users. They gained access and did all of this. There's speculation they could have used voice cloning, which could have helped with the receiver believing who the other person was. There's other conjecture that the information on whoever they were pretending to be came off of LinkedIn, because you can go on LinkedIn and find out who the head of cybersecurity is at Caesars very easily. But it brought up a very interesting concept for any of you with cyber insurance. Do you pay the ransom or do you not pay the ransom? What are you taught as a kid on the schoolyard? You don't pay the bully your lunch money, because what is he going to do? He's going to come back the next day. So these two companies had a huge dilemma and they handled it very differently. Caesars very quickly said, yeah, we're just going to pay the ransom.

(14:55):
It was originally $30 million. They negotiated it to $15 million, they made a one-time payment, and all their systems magically came back and everything was fine. On the flip side, MGM took a hard line. They were borrowing the FBI's adage of we do not negotiate with terrorists, and they took the hard-line approach: we're not paying anybody. It was calculated at $8.4 million every single day that went by as they tried to fix their own systems. Now, you wonder how this is even publicized, because it seems like as a company owner you'd be embarrassed that you got hacked and that you have these kinds of losses. It could affect your stock value, affect your reputation. So I'm like, how did so much information leak? That's when I learned companies have to file what's called an 8-K report with the Securities and Exchange Commission within four days of any news that could possibly affect shareholders. So they had to come right out up front and admit all the dirty details. That's how we know how it happened at both Caesars and MGM. Experts claim the heist could have been prevented with an AI-powered solution that isn't vulnerable to emotional manipulation.

(16:16):
So that's $8.4 million a day, equal to a hundred million plus $10 million in legal fees and IT fees, versus the one-time payment of $15 million. So that's something to think about, because with cyber insurance, the policies do have coverage for ransom payments; they'll give you money for ransom payments. It'll be really interesting in the future what insurers say about which way you should go, because it sounds counterintuitive to pay the bad guys, but by doing that you are mitigating your loss and cutting it way down. And what's funny that I learned while researching this is there's some sort of code of honor among the bad guys. These cyber hackers, cyber thieves, they will say, if you pay us, we really will leave you alone, we promise. And for the most part they do. So it's really funny that they have this code of honor among thieves, and from all that we've heard, after Caesars paid, they never came back.
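
To make that pay-versus-don't-pay comparison concrete, here is a rough back-of-the-envelope sketch using only the round numbers quoted above. The twelve-day outage length is an assumption chosen to match the roughly $100 million figure mentioned in the talk; it is not either company's actual accounting.

```python
# Rough sketch of the pay-vs-don't-pay math, using only the round numbers
# quoted in this talk (not actual company figures).

daily_outage_cost = 8.4e6    # quoted loss per day while systems were down
outage_days = 12             # assumed outage length, to roughly match ~$100M
legal_and_it_fees = 10e6     # quoted legal and IT remediation costs
negotiated_ransom = 15e6     # quoted one-time ransom payment after negotiation

cost_if_not_paid = daily_outage_cost * outage_days + legal_and_it_fees
cost_if_paid = negotiated_ransom

print(f"Refuse to pay: ~${cost_if_not_paid / 1e6:.0f} million")  # ~$111 million
print(f"Pay the ransom: ~${cost_if_paid / 1e6:.0f} million")     # ~$15 million
```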

(17:21):
So you're wondering, who in the world invented voice cloning and why would they do that? It just sounds like such a bad thing. Who are these people? Well, it was actually done for very noble purposes. Imagine speech synthesis for people who lose their speech due to an illness, a stroke, Parkinson's, et cetera. This is something that can help them regain a voice. Commercial uses: having a natural-sounding voice, maybe endorsements. You could have a children's book read by their favorite celebrity, that kind of thing. An example was Val Kilmer. If you guys saw the latest Top Gun last year, Top Gun: Maverick, he had lost his voice due to throat cancer treatment and he had no voice at all. And a company recreated his voice. They had tons of samples of his speaking voice just from his past movies, and it was seamless. Even I didn't realize that was AI until I heard about it.

(18:23):
Risks and future implications. Well, it may have been created for good things, but the bad actors are all over it and the technology is getting much better. As you could see, five years ago it was almost flawless, and now it's even more advanced and easier. So use your imagination. Could a voice absolutely sink or totally tank your company's reputation or stock value? Imagine someone just saying something, or you seeing video of them doing something, that's absolutely horrible. Even if they finally figure out in 48 hours that it was fake, you still have all that damage that was done.

(19:08):
The future of cyber crimes involves all forms where they meld it together. You have both the visual and the voice together, deepfake images and voice simulations. So how does this play into cyber insurance and just the insurance world in general? There's going to be an increase in homemade deepfakes. With the book that Patty was mentioning, it takes so long to write a book, and then with the publisher it takes another year. I ended the chapter basically saying, will a day come when regular people can do this stuff? Well, that day is here now. For the Joe Rogan clip five years ago, they used thousands of hours of his voice. Now they claim they can do it with a few-second snippet. So the technology's absolutely changed, just in the last year. Corporate value, blackmail potential. If any of you were in the breakout session Patty and I did, you heard this one.

(20:06):
A principal was having a spat with the head coach of a high school team. Someone routed an email with the coach saying certain words, words so horrible he would be fired immediately. Well, it was proven to be completely fraudulent. They did a full investigation. The guy didn't say any of it. His reputation is ruined. He's having to work at a different school. And that was very low-key. So imagine that on a corporate level, or imagine it on a political level, or in a country to incite upheaval, anything. Do you pay or not pay? That's going to be something the insurance carriers are going to have to figure out, whether they push for you to pay a demand or not. Fake losses via images, via accessible apps: so can just normal lay people do this now? The answer's yes. I work for a very large company, which prides itself on how fast it pays a claim.

(21:10):
Because with all the competition in the industry, you want to be the best and the fastest. And it got to a point where we were paying auto damage estimates based on photos the insureds would send us. It's bumper damage? Send us a picture. They're able to write an estimate off of it, issue a check, close the file. Same with homeowners' losses: go through your house, videotape the damage, show us pictures of the damage, that sort of thing. Well, doing a quick look just on my Apple iPhone two weeks ago, I put in "voice cloning." These are only the first two apps that popped up. They even advertise: sound like anyone, clone anyone's voice easily. And the funny part is how it says ages four and up. So not only lay people, but your toddlers, I guess, can do this. And of course there's fine print, for amusement purposes only, that kind of thing. Not to mention, who's making these apps and where's your information going?

(22:10):
These have been around almost 10 years, where it says amuse your friends. None of my friends would necessarily be amused by damage on my car, but for some people, you can add damage. Anyone heard of Midjourney? It makes images through AI. I play with that a little bit as an artist. I created this: on the right, of course, is the car that's totally not damaged, and on the left is all the damage. Maybe an insurance carrier could look at that and decide how to estimate it, or whether to total the car. So if any of you are thinking the picture on the right is real and the left is fake, the whole entire thing's fake. And I made that on Midjourney in about 15 minutes, and it's really high-def. You could zoom in and everything is just perfect, even in the background and everything. So there's a homeowner's claim: my living room was flooded. Those are two separate pictures. Again, took me about 10 minutes.

(23:14):
Same thing here. Water damage to my wood floors. That's completely fake. It's not even based on real pictures. It's completely phony. Took me minutes, and if I can do it, anybody can do it. They've talked about some safeguards for that. And I'm coming back from another conference now, and you have to start brainstorming these safeguards because this is new stuff. How do you combat it? One idea I thought was good is where you insist on photos and video coming through the company's portal, where you have to download your company's app. When you video your damaged living room, it has to be through the company's portal, kind of a blockchain-style safeguard, or the photos have to be taken through their app when you click them, instead of you just emailing pictures. And there are ways to check metadata and that sort of thing, but that's one idea.
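
To make "check the metadata" a little more concrete, here is a minimal sketch (mine, not any carrier's actual intake system) of pulling a photo's EXIF tags with the third-party Pillow library. The file name is made up, and metadata can be stripped or forged, so this is only a first-pass sanity check, not proof a photo is genuine.

```python
# Minimal illustration of inspecting a claim photo's EXIF metadata with Pillow.
# AI-generated or re-exported images often arrive with no camera metadata at all,
# which is one quick red flag; absence or presence proves nothing by itself.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return a dict of human-readable EXIF tags for the image at `path`."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("claim_photo.jpg")  # hypothetical file name
for key in ("Make", "Model", "DateTime", "Software"):
    print(f"{key}: {tags.get(key, '<missing>')}")
```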

(24:07):
Mitigation strategies: ongoing education. The cliche is true, knowledge is the first step. You have to know this stuff exists. It's amazing when you talk to people that aren't in this business; they've never heard of it. You hear about elderly people, or people that live kind of away from technology, and they don't know about this stuff, and they're very easy victims, and it's very sad that they're preyed upon. Two-factor authentication. I'm sure you guys all hate this like I do. I try to log into my work computer and it sends a prompt where I have to press that button on my phone, and then it texts me a six-digit number. Sadly, that's the world we live in, and that's the way to do multi-factor authentication. It's just necessary. Understanding insurance policies and cyber fraud coverage: if you're a producer or an agent and you're selling this, you're going to have to explain this stuff to your customers, because this is new territory.
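
For the curious, those rotating six-digit codes (at least the authenticator-app kind; text-message codes work a bit differently) are typically time-based one-time passwords (TOTP, RFC 6238). Here is a stripped-down sketch of how one is derived from a shared secret plus the current time, just to show why a code read out to a vishing caller goes stale within about 30 seconds. This is an illustration, not any vendor's actual implementation, and the demo secret is made up.

```python
# Stripped-down TOTP (RFC 6238) illustration: deriving a rotating six-digit
# code from a shared secret and the current time. Sketch only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # steps forward every 30 seconds
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Made-up demo secret; a real secret is provisioned per user by the identity provider.
print(totp("JBSWY3DPEHPK3PXP"))
```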

(25:02):
They understand slip-and-fall liability and property damage. This cyber stuff is completely new. In fact, when cyber policies first came out, when it was just new territory, there was an exclusion if your employee was involved in any way. That way, if your employee was fooled into sending the money, it was excluded: oh, your employee did it. Now, thankfully, they've filled those gaps; they sell amendments and have changed the wording so it provides more coverage. But it's important to know that stuff. If you have a policy right now, you might want to think about that. Or if you're selling policies, it's important stuff. Now, this is what I've been promised, but I don't have any details yet: detection tools are in development to reduce voice fraud, including biometrics, deepfake detectors, and anomaly detection analysis. That's a little over my head. I'm guessing it's going to be able to monitor the voice and be able to tell something about whether it's genuine or not genuine.

(26:05):
Maybe it's the equivalent of metadata in a sound format, and I'm hoping there's more of this, but the problem is that it's reactive. We're having to invent this stuff because we've been victims. Vishing tests: I thought I invented that. You do the phishing tests now; you get the bad emails all the time and you have to hit report. What about a vishing test, where people call you and say, hi, I'm the guy from IT, can you go in and change the password, and see if any of your employees fall for that? I think they ought to hire third-party companies that do those sorts of tests, because that's exactly the attack that hit the largest casinos on the whole entire planet, and they fell for it.

(26:47):
Don't worry, it's not Skynet. We all know from the Terminator movies that Skynet is definitely going to happen one day. So it's not all scary. And I know the rest of this conference was about all the positives and pluses with AI, and I'm excited about it. I've used ChatGPT when I need some grammar help. I love all of that. But sadly, there are some bad actors out there that are on the other side of this thing, and they're finding a way to beat the system. And I learned a couple of interesting little tidbits this last week. I'm part of a group called InfraGard. It's run by the FBI and the business sector. And I attended an AI voice PowerPoint just on Tuesday, and I wrote down a couple of things I found interesting. One was: picture one person deepfaked, a CEO. You see their face, you hear their voice.

(27:38):
They did a complete boardroom meeting where maybe you're the one real human that's about to get duped, and you see your entire board on there. They crafted each individual person to look just like them and sound like them, a whole board meeting that was completely faked. I thought that was pretty scary. And here's another thing I heard that was a little freaky. You know what a spoof email is, where it's an email address that's a little bit off from the real one, like instead of abc.com it's AB_C; they change it just a little bit. And these criminals try to hack you with emails that aren't completely spelled right, that kind of thing. Well, there's a glitch with phone systems where, when you answer your phone and it has caller ID and it says where it's calling from, the phone system fixes it, thinking the people calling you made a typo on their end.

(28:32):
So the caller ID portion fixes it to the proper name. So if you get a call from a company pretending to be something else, but it's off by a digit, the caller ID might fix it so it doesn't get caught, and it looks legitimate. That's something I had never heard about, and that's pretty freaky. So thanks again for this. Be aware, tell your friends, tell your loved ones, tell your kids, get a family password, all that stuff. Sadly, in any crowd I talk to, there's one person where this has happened to them or someone they know. And just awareness is the first step. So thanks a lot for coming in this Friday afternoon. I appreciate it.