How the evolution of digital deception is changing cyber risks


The pages of history are full of stories of scams and fraudsters. The most infamous example is the Trojan Horse: the Greeks, unable to break down the walls of Troy, fooled the Trojans by offering them the gift of a giant hollow horse with enemy soldiers hidden inside. Or consider the Oracle of Delphi, the legendary seer who claimed to speak with the voice of a god and, in exchange for "divine" advice, fooled people into revealing their secrets and plans.

Fast forward a few thousand years and we find snake oil salesmen hawking bogus pills, powders and potions that promised to improve health, eradicate disease, promote weight loss and regrow hair. It mattered little that these concoctions were often nothing more than alcohol or flavored water; the real ingredient was always deception, sprinkled with persuasion and influence.

Adapting to changing technologies
As tools and technologies evolved, so did the tactics of tricksters. When the printing press arrived, it ushered in an era of forged documents, counterfeit currency, and fake land deeds. The telegraph brought with it the age of wire fraud, in which scammers posed as long-lost relatives and bilked victims out of cash. Fax machines fueled the "Nigerian prince" scam, also known as advance-fee fraud.

Mass media took hoaxes to a whole new level. A radio broadcast of "The War of the Worlds" convinced millions that Martians were invading New Jersey. When telephones became widespread, prank calls and blackmail surged. Along with TV came infomercial scams, sweepstakes scams, and get-rich-quick schemes.

With greater technology comes greater scale and opportunity
The internet revolutionized how scammers operate. No longer required to engage with victims face-to-face, scammers can exploit the omnipresence of the internet, the power of instantaneous communication and the cover of anonymity to deceive on a much larger scale and at a faster pace, while remaining almost impossible to trace. Phishing emails emerged, carefully crafted to resemble messages from legitimate companies and trick recipients into divulging sensitive information and login credentials. Fraudulent websites became a go-to strategy, built to steal credit card data under the pretense of "unbeatable deals." The internet continues to provide fertile ground for social engineers, scammers and impersonators.

Social media: The ultimate disinformation platform
Social media was once lauded as a tool for building connections and communities. But with billions of users, these platforms have morphed into hotbeds of misinformation. The more shocking or emotionally charged the content, the faster it travels, even when it is a complete fabrication.

And it's not just the content. Social media algorithms are designed to keep users engaged and scrolling, often creating echo chambers and filter bubbles that reinforce existing beliefs, amplify biases and blur the line between fact and fiction. This phenomenon deepens polarization and erodes the very foundations of trust and truth.

AI: An inflection point for deception
As if the deception landscape wasn't already complex enough, the explosion of artificial intelligence adds another dimension to the problem. AI technologies such as machine learning, natural language processing and generative AI are making it easier to create, enhance and automate deception at unprecedented scale.

For example, deepfakes can make world leaders appear to say things they never said, portray celebrities in situations that never occurred, or mimic the voices of CEOs with near-perfect accuracy. AI-powered computer-generated imagery (CGI) can conjure events out of thin air, complete with fake news footage that appears all too real. And this is not limited to images, voices or videos. AI can also generate fake text with frightening fluency, churning out convincing articles, social media posts, and even entire websites filled with disinformation at a speed and scale no human could match. Armed with a few small bits of information about a victim's interests and beliefs, AI chatbots can be more effective at manipulation than a real person. In other words, the more an AI model knows about you, the better equipped it is to tailor its lies and exploit your unique biases and blind spots.

Future implications of synthetic media
Technology that creates highly realistic fake content is advancing at breakneck speed. Even though tools for creating basic deepfakes have existed for years in varying degrees of sophistication, today's synthetic media has reached a point where it is virtually indistinguishable from reality. The potential applications are so dangerous and disturbing that a report commissioned by the U.S. government has warned advanced AI could pose an extinction-level threat to humanity. The possibilities for harassment, intimidation, reputational damage, incitement, gaslighting, disinformation and social engineering have never been greater.

Human intelligence is critical for combating digital deception
Human intelligence and intuition are extremely versatile. Our ability to learn, to reason, to sense risk, to read between the lines, to think creatively and to find unconventional solutions is unparalleled, even when compared with AI. But these human superpowers need to be honed, practiced and perfected. Organizations must prioritize continuous cybersecurity education so that employees are trained to sniff out security risks and to apply critical thinking. Security awareness training remains one of the most cost-effective and proven ways for organizations to confront the risk of online deception head-on.

The confluence of the internet, social media and AI has allowed deception to flourish. Individuals, organizations, governments, legislators and society at large share a collective responsibility to educate themselves, resist malicious influence, make well-informed decisions and engage with one another on the basis of verifiable truth. This is a vital form of self-care and social responsibility in the 21st century. And it all starts with awareness.
