Op-Ed: Is Artificial Intelligence a Major Cyber Threat for 2024?


By Ed Watal, Founder & Principal — Intellibus

The world has become deeply divided over artificial intelligence. While many have hailed the technology as the “future of work,” others have raised concerns about its potential for harm.

The true nature of AI is somewhere in between. 

Artificial intelligence is neither exclusively friend nor foe but a tool whose impact, positive or negative, depends on how it is used and by whom. It is too late to turn back the clock on AI, so we must instead focus on minimizing the misuse of this extraordinary innovation.

Striking a balance between the good and the bad of AI

Leaders in several fields have praised artificial intelligence for its ability to streamline operations. When used responsibly, it can make workers’ jobs easier and help businesses become more efficient and profitable. Many of today’s AI models are also remarkably flexible, adapting to any number of industries based on their unique needs and use cases.

However, critics have focused on the more harmful uses of AI technology. After all, if anyone can leverage artificial intelligence to make their job easier, it stands to reason that wrongdoers, such as hackers and scammers, will do the same. This is the downside of AI’s highly customizable nature: people can find ways to use the technology to cause harm.

That being said, it is important to note that artificial intelligence is not a cyber threat in and of itself; it is the wrongdoers who abuse the technology who give it a bad reputation. AI is no different from any other innovation in history: some people will use it responsibly, and others will abuse it for their own benefit. We cannot let that minority prevent us from embracing a paradigm shift that has the potential to change society as we know it.

Yet by identifying and understanding the cyber threats these dangerous use cases pose, we can approach cybersecurity more proactively. A future in which we freely use artificial intelligence to achieve unparalleled levels of efficiency and productivity is within reach, but only if we mitigate the uses of the technology that harm others.

The abuse of generative AI for phishing scams and deepfakes

One of the most popular forms of artificial intelligence today is generative AI, exemplified by large language models (LLMs). These programs allow users to generate text in seconds that would take minutes or even hours to write by hand. Although early versions of the technology produced flawed output that was easily distinguishable from human-created material, continued training and refinement have made the material these models synthesize impressively high-quality.

Generative AI has already proven useful in many industries. For example, numerous AI models can now write everything from sales pitches to entire articles, and chatbots, another popular application of the technology, allow businesses to automate their customer service. Using these tools, workers and businesses can hand off the more menial parts of their duties, streamlining operations, increasing productivity, and focusing more of their effort on the work that only they can complete.
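To make the legitimate use case concrete, below is a minimal sketch of such a customer-service chatbot. It assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the model name and the Example Co. prompt are illustrative placeholders rather than recommendations.

```python
# Minimal customer-service chatbot sketch.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
# The model name and system prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a customer-support agent for Example Co. Answer order and "
    "shipping questions politely, and escalate anything you cannot resolve."
)

def answer(customer_message: str) -> str:
    """Send one customer message to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Where is my order #1234?"))
```

A real deployment would also pass the model order data and conversation history, but the basic wiring is this simple.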

Nevertheless, we should be aware of the dangerous applications of this technology. Scammers, for instance, are using generative AI to improve the quality and efficiency of their phishing schemes. A phishing scheme impersonates another person to trick the recipient of a written message into unwittingly giving up personal information, and with the help of generative AI, scammers can make these messages more convincing than ever before.

In the past, it was relatively easy to identify fraudulent messages by mistakes like grammatical errors or inconsistencies in voice. Today, however, a scammer can train a generative AI model on a library of legitimate messages written by the person they are impersonating, and the model will then produce convincingly written text in that person’s voice. As the technology continues to improve, distinguishing between authentic and fraudulent messages is becoming much more difficult.
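Defenders can turn the same machine learning around. The sketch below trains a baseline text classifier to flag suspicious messages; it assumes scikit-learn, the tiny labeled corpus is a fabricated placeholder, and a production filter would need far more data plus signals such as sender metadata.

```python
# Baseline phishing-text classifier sketch.
# Assumes: `pip install scikit-learn`. The labeled corpus below is a
# fabricated placeholder; real systems train on thousands of examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked. Verify your password here immediately.",
    "Wire the funds today or the deal falls through. Use this new account.",
    "Attached is the Q3 report we discussed in Monday's meeting.",
    "Does lunch tomorrow at noon still work for you?",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features plus logistic regression: a simple, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

suspect = "Urgent: confirm your credentials at the link below."
print(model.predict_proba([suspect])[0][1])  # estimated probability of phishing
```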

Even more alarming is generative AI’s ability to create convincing audiovisual material impersonating a real individual, known as “deepfakes.” Using a person’s visual or vocal likeness, nefarious actors have produced AI-generated images, videos, and audio clips deployed for all manner of illegitimate purposes, from blackmail and reputational damage to the manipulation of markets and political races.

In the business world, the implications of deepfake technology are serious. A scammer could create a deepfake of a financial advisor’s client authorizing a transaction, causing direct financial losses. Even worse, wrongdoers could falsify audio clips or images in an attempt to sway the stock market.

For example, a deepfake could fabricate the announcement of a new business partnership, causing stock prices to skyrocket. This is not only unethical but also potentially illegal.

These examples are only the tip of the iceberg when it comes to the harm deepfake technology can cause. Some of the most infamous uses of deepfakes involve the spread of misinformation, which is especially dangerous where public figures are concerned. Wrongdoers have used deepfakes to effectively “steal” a person’s likeness and create false endorsements, while others have deployed them to inflict reputational damage. During political races in particular, deepfake content of candidates could change the tide of entire election cycles, and with it the world stage as we know it.

Exploiting AI’s data analysis capabilities to automate cyber attacks

Malicious actors have also found ways to exploit the data analysis capabilities of AI for their own nefarious gain. Data analysis may seem relatively innocuous on its surface, but an artificial intelligence model can process data far faster than a human. Wrongdoers can use this capability to analyze data that should never fall into their hands, such as a network’s security and access data.

One dangerous application of artificial intelligence is the automation of cyber attacks. By training a model to continuously probe networks, hackers can identify and exploit vulnerabilities faster than defenders can remedy them. What might once have taken hackers hours or even days to unravel can now be found near-instantaneously, so we should expect the volume of cyber attacks to increase significantly over the coming years.

In many cases, hackers will use these automated attacks to target supply chains. When a cyber attack is levied against one link in a supply chain, it can cause a ripple effect throughout the entire chain and industry. 

For example, if a hacker targeted the shipping network responsible for delivering raw materials to factories, the effects of the attack would be felt by manufacturers, retailers, and consumers alike. The potential destruction these attacks could cause is massive and terrifying, especially if the target is critical infrastructure.

We must remember that we live in an interconnected world. Many critical systems are operated by computers, and each one presents an entry point that hackers could exploit to manipulate it. The potential loss of life from an attack on power grids or telecommunications systems, or the economic ruin from an attack on financial markets, is virtually unimaginable. Unfortunately, many organizations and government entities are unprepared to handle these attacks and their aftermath.

Fighting fire with fire in the realm of cybersecurity

Thankfully, the technology used by cybersecurity experts is advancing just as quickly as that used by malicious actors. In many cases, the same technology being used to commit crimes can be turned on its head and used for more beneficial purposes. 

For instance, the same models hackers use to probe networks for exploitable vulnerabilities can be trained to do so defensively, alerting operators to what needs to be repaired before an attacker finds it. Other models are being developed to analyze text, images, and audio to evaluate their authenticity.
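As a toy illustration of that defensive pattern, the sketch below checks a host for open TCP ports and raises an alert when one falls outside an approved baseline. It is a deliberately simplified stand-in for real vulnerability scanners, the host address and baseline are hypothetical, and it should only be run against systems you are authorized to test.

```python
# Toy defensive scan: flag open TCP ports that are not on an approved baseline.
# The host and baseline below are hypothetical placeholders; only scan systems
# you own or are authorized to test.
import socket

HOST = "192.0.2.10"        # placeholder address from the documentation range
APPROVED = {22, 443}       # ports expected to be open on this host
PORTS_TO_CHECK = range(1, 1025)

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for port in PORTS_TO_CHECK:
    if is_open(HOST, port) and port not in APPROVED:
        print(f"ALERT: unexpected open port {port} on {HOST}")
```

Real scanners go far beyond open ports, checking software versions against vulnerability databases, but the probe-and-alert loop is the same.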

However, the most potent tool against the cyber threats posed by artificial intelligence is education. Organizations that want to reduce their exposure to AI-powered attacks must keep employees informed of new threats as they emerge and evolve. That means training staff in proper cybersecurity procedures, such as strong password practices and access control, and in how to recognize potential phishing attempts in suspicious materials.

Artificial intelligence is an innovation that can make a real difference in the world, but to reap the benefits of this powerful technology, we must also understand and mitigate its potential consequences. Wrongdoers have already found ways to leverage the technology for their own nefarious ends.

By understanding the cyber threats these dangerous use cases of artificial intelligence can pose, along with the methods we can use to stop them, we can pave the way for a future in which AI is a tool that makes the world a better place rather than a source of fear and damage to our society.

— Ed Watal is the founder and principal of Intellibus, an INC 5000 Top 100 software firm based in Reston, Virginia. He regularly serves as a board advisor to the world’s largest financial institutions, and C-level executives rely on him for IT strategy and architecture thanks to his business acumen and deep IT knowledge. One of Ed’s key projects is BigParser (an Ethical AI Platform and a Data Commons for the World). He has also built and sold several tech and AI startups. Before becoming an entrepreneur, he worked at some of the largest global financial institutions, including RBS, Deutsche Bank, and Citigroup. He is the author of numerous articles and ‘Cloud Basics,’ one of the defining books on cloud fundamentals. Ed has substantial teaching experience and has lectured at universities globally, including NYU and Stanford. He has been featured on Fox News, Information Week, and NewsNation.


