Artificial Intelligence: What Are the Major Cyber Threats for 2024?

By Ed Watal, Founder & Principal — Intellibus

Artificial intelligence has become one of the most contentious technologies in history. For every person who hails AI as the force behind the “revolution of work,” another warns of its dangers.

The truth of this paradigm shift lies somewhere in between. While AI does pose some cyber threats, there are also numerous positive use cases that it would be unwise to ignore. 

It is important to remember that AI technology is not inherently a threat. Rather, those who use the technology for nefarious purposes are the problem. A fundamental truth of humanity is that whenever there is an innovation, people will find a way to abuse it for personal gain. However, if we can understand the potential threats caused by these negative use cases, we can pave the way for a brighter future where AI can be used to make the world a better place.

Abuse of generative AI for illicit purposes

Many of the most well-known AI models in use today, including OpenAI’s ChatGPT, fall under the generative AI category: they generate output in the form of text, images, or audio based on a prompt given by the user. This technology has numerous legitimate use cases, including drafting emails, powering customer service chatbots, and conducting research, but it can also be abused in ways that cause significant harm.
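
As a concrete sketch of that prompt-and-response loop, the snippet below calls a hosted model through OpenAI’s Python SDK. The model name and prompt are placeholders, and the call assumes an API key is already configured in the environment.

```python
# Minimal sketch of prompting a generative model via OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is only an example.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute any available chat model
    messages=[
        {"role": "user", "content": "Draft a brief follow-up email to a client."}
    ],
)

print(response.choices[0].message.content)
```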

One of the most dangerous abuses of generative AI technology is phishing, in which scammers attempt to entice victims into revealing personal information by impersonating a trusted source. In the past, it was easy to sniff out these schemes because of mistakes like grammatical errors or inconsistencies in voice. Today, however, scammers can train an AI model on materials written by the party they hope to impersonate and generate convincing messages in that party’s style.

From a corporate perspective, this can be incredibly dangerous if an employee receives an email they believe was written by their boss but was actually crafted by a scammer. If the employee cannot distinguish the fraudulent message from a legitimate one, they risk exposing their own information and data as well as that of the organization, its clients, and its partners.

Generative AI’s capabilities have also expanded beyond impressively realistic text to convincing fabricated images and audio clips known as “deepfakes.” Deepfake technology has made the news for its public-facing abuses, such as reputational attacks, blackmail, and the spread of misinformation, but certain uses of this technology in the corporate sphere are equally frightening.

For example, if a scammer creates a convincing audio deepfake of a business’s client, they could use it to authorize fraudulent transactions. Voice authentication alone is no longer enough, because wrongdoers have found ways to reliably and convincingly imitate an individual’s voice. Businesses must rethink their security measures to ensure they do not fall victim to this kind of fraud.
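
One practical response is to treat a voice match as just one factor and require out-of-band confirmation before any funds move. The sketch below is illustrative only: the threshold, scores, and helper names are hypothetical stand-ins rather than any particular vendor’s biometric or messaging API.

```python
import secrets

# Illustrative sketch of layered transaction approval: a voice match alone is
# never sufficient; a one-time code sent over a separate channel is also
# required. The threshold and demo values are hypothetical.
VOICE_MATCH_THRESHOLD = 0.90

def approve_transaction(voice_match_score: float,
                        code_entered: str,
                        code_sent: str) -> bool:
    """Approve only if BOTH factors pass."""
    voice_ok = voice_match_score >= VOICE_MATCH_THRESHOLD
    code_ok = secrets.compare_digest(code_entered, code_sent)  # timing-safe compare
    return voice_ok and code_ok

# The one-time code would be delivered via a separate, pre-registered channel
# (e.g., the client's mobile app), never over the same phone call.
sent_code = "483920"
print(approve_transaction(0.97, "483920", sent_code))  # True: both factors pass
print(approve_transaction(0.97, "111111", sent_code))  # False: wrong code
```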

How hackers are misusing AI’s data analysis capabilities

Another feature of AI technology that wrongdoers often exploit is its ability to process data significantly faster than humans. Using AI technology, people can now comb larger data sets with much greater accuracy and efficiency. While this is a benefit in many ways, it can be abused to cause massive harm.

Hackers have trained AI models to constantly probe networks for vulnerabilities they can exploit. Using this technology, attackers can gain access to networks faster than operators can patch the holes, making cyberattacks more prevalent and dangerous than ever. In a world where virtually every industry runs its operations on computers, the threat of automated cyberattacks is incredibly frightening.

Some of the most dangerous instances of these automated attacks target critical infrastructure and supply chains. If a hacker manages to gain access to or control of a single link in a supply chain, the compromise can ripple through the entire network. And when attacks are aimed at points like shipping routes, traffic lights, air traffic control systems, power grids, telecommunications networks, or financial markets, the potential for economic ruin and loss of life is profound.

Combating negative use cases of AI

Thankfully, AI technology is not solely an instrument of harm: many of the same tools that wrongdoers use to wreak havoc can be turned against them. For example, the same technique hackers use to probe networks for weaknesses can be applied by operators to identify areas that need to be patched and repaired. Developers are also creating AI models that analyze text, images, and audio to help determine their authenticity.
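
As a concrete illustration of that defensive flip, the sketch below automates the same kind of probing an attacker would run, but against hosts you administer, so unexpectedly open ports can be closed before someone else finds them. It is a minimal example using only Python’s standard library, not a substitute for a full vulnerability scanner.

```python
# Minimal sketch: scan hosts you own (or are authorized to test) for open
# TCP ports, so exposed services can be patched or shut down proactively.
import socket
from concurrent.futures import ThreadPoolExecutor

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0  # 0 means the connection succeeded

def scan(host: str, ports: range) -> list[int]:
    # Probe ports concurrently, the same speed advantage attackers exploit.
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = pool.map(lambda p: (p, port_is_open(host, p)), ports)
    return [port for port, is_open in results if is_open]

# Example: audit the well-known port range on the local machine.
print(scan("127.0.0.1", range(1, 1025)))
```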

That being said, as powerful as some of these tools can be at combating artificial intelligence’s negative use cases, our most potent weapon against wrongdoers is education. By remaining aware of the potential cyber threats caused by these abuses of AI, we can take a proactive, vigilant approach to cybersecurity. Organizations can help their employees learn proper security practices, like strong passwords and access control, and teach them to distinguish potential phishing attacks from legitimate messages.
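
Simple automation can reinforce that training. The sketch below flags two common phishing tells, urgency language and links that point outside an expected domain; the keyword list and trusted domain are illustrative placeholders, not a real filter.

```python
# Sketch of heuristic phishing triage: flag urgency wording and links whose
# host falls outside an expected domain. Word list and domain are examples.
import re
from urllib.parse import urlparse

URGENCY_PHRASES = {"urgent", "immediately", "verify your account", "suspended", "act now"}

def suspicious(message: str, trusted_domain: str = "example.com") -> list[str]:
    flags = []
    lowered = message.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            flags.append(f"urgency language: {phrase!r}")
    for url in re.findall(r"https?://\S+", message):
        host = urlparse(url).hostname or ""
        # Accept the trusted domain and its subdomains; flag everything else.
        if host != trusted_domain and not host.endswith("." + trusted_domain):
            flags.append(f"link to unexpected domain: {host}")
    return flags

print(suspicious("URGENT: verify your account at http://example-c0m.evil.io/login"))
# flags the urgency wording and the off-domain link
```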

Artificial intelligence is here to stay, and in the right hands, it has the power to make the world a better place. Unfortunately, wrongdoers will continue to abuse AI technology and find ways to apply it to their schemes. Understanding the cyber threats that hackers and scammers have created is the first step toward fighting back and creating an ecosystem where we are free to use AI responsibly for the benefit of humanity.

Ed Watal is the founder and principal of Intellibus, an Inc. 5000 Top 100 software firm based in Reston, Virginia. He regularly serves as a board advisor to the world’s largest financial institutions, and C-level executives rely on him for IT strategy and architecture because of his business acumen and deep IT knowledge. One of Ed’s key projects is BigParser (an ethical AI platform and a data commons for the world). He has also built and sold several tech and AI startups. Before becoming an entrepreneur, he worked at some of the largest global financial institutions, including RBS, Deutsche Bank, and Citigroup. He is the author of numerous articles and one of the defining books on cloud fundamentals, ‘Cloud Basics.’ Ed has substantial teaching experience and has served as a lecturer at universities around the world, including NYU and Stanford. He has been featured on Fox News, InformationWeek, and NewsNation.