
Are You Concerned About Generative AI Becoming A Cybersecurity Risk?

Published: May 4, 2023

By Matthew Morgan

In my previous Forbes article, I explored how AI-assisted cybersecurity platforms have become a common discussion topic in my conversations with CISOs and CIOs.

In that article, we examined how people see the rise of artificial intelligence-enabled cybersecurity platforms opening up new possibilities for organizations to dramatically improve their ability to stop breaches and defend their systems.

I received a lot of great feedback on the article, including a common question and concern: Can adversaries in the realm of cybersecurity use this new generation of “generative AI” tools to create new risks for business systems?

While, in general, artificial intelligence has the potential to enhance security measures and more effectively detect threats, there is an emerging concern that generative AI tools can—and likely will—be leveraged by malicious actors to launch attacks that are more sophisticated and harder to detect.

Yes, Generative AI Is A Cybersecurity Risk

This new category of AI tools, referred to as “generative AI,” can generate human-like responses to user inquiries. These tools are trained on enormous volumes of data, which makes them capable of generating natural-sounding responses, technical code or other key information in response to a wide range of user inputs.

Although these tools were developed with the intention of being used for natural language processing tasks such as language translation and question-answering, it is not difficult to imagine how cybercriminals could leverage this technology for malicious purposes.

As an example, consider how generative AI could aid adversaries in spear-phishing attacks. The goal of such an attack is, of course, to trick the recipient into clicking on a link or downloading a file that contains malware. By using generative AI to produce convincing text for the email, attackers could increase the likelihood that the recipient falls for the scam. Consequently, the (ab)use of these tools may reduce dependency on dark web services that offer content localization, potentially leading to an increase in the volume of automated, localized spam and phishing or fraudulent content.

Another potential adversarial use case for generative AI tools is in generating convincing chatbot conversations. Chatbots are becoming increasingly common in customer service and support roles, but they can also be used for malicious purposes. By using generative AI to write the responses for a chatbot, adversaries could create a more convincing and realistic conversation that is better able to deceive the user. This could be used for anything from stealing sensitive information to gaining access to a company’s network.

Other possibilities could include generating programming code to use in application workload-oriented attacks, either through the exploitation of API vulnerabilities or injection efforts. Since generative AI can potentially equip adversaries with useful skills in complex coding, this becomes a real concern for organizations.

It’s also not difficult to imagine (ab)using generative AI to reduce the time-to-deploy of malware by easily creating obfuscated code or dynamically regenerating malicious code, making each sample unique and, therefore, difficult for traditional security mechanisms to detect.

How You Can Mitigate The Cybersecurity Risks Of Generative AI

So, what can be done to mitigate the risks associated with the use of generative AI by cyber adversaries? The first step is to raise awareness about the potential risks of this technology. Security professionals and leadership should be aware of the potential use cases for generative AI and vigilantly watch for any signs of suspicious activity.

It is also important to ensure employees are educated on the risks of social engineering attacks and trained to spot phishing emails and other types of threats.

Another risk mitigation option is the use of email filtering systems that are capable of detecting and blocking spear-phishing emails, as well as modern cybersecurity solutions that use machine learning capabilities to identify and block suspicious chatbot conversations.
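To make the filtering idea concrete, here is a minimal sketch of the kind of scoring logic a filter might apply. This is illustrative only: the patterns, threshold and function names are invented for demonstration, and production systems rely on trained machine learning models and many more signals than keyword matching.

```python
import re

# Toy heuristic scorer for suspicious emails. The pattern list and
# threshold below are invented for illustration; real email filters
# use trained models, sender reputation, link analysis and more.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)? (action|response) required",
    r"click (the|this) link",
    r"password (reset|expired)",
    r"wire transfer",
]

def phishing_score(subject: str, body: str) -> int:
    """Count how many suspicious patterns appear in the email."""
    text = f"{subject}\n{body}".lower()
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, text))

def is_suspicious(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag the email for quarantine when the score meets the threshold."""
    return phishing_score(subject, body) >= threshold

if __name__ == "__main__":
    subj = "Urgent action required: verify your account"
    body = "Click this link before your access expires."
    print(is_suspicious(subj, body))
```

A rule-based scorer like this is exactly what AI-generated phishing undermines, since convincing text can avoid stock phrases; that is why the article points toward machine-learning-based detection rather than static keyword lists.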

An additional potential control point is for vendors that offer generative AI systems to implement technical controls to limit the potential for attackers to use generative AI for malicious purposes. This will be an ongoing challenge for the developers of those systems, as managing intent is particularly difficult.
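One simple form such a vendor-side control could take is an output filter layered on top of the model. The sketch below is a toy example under invented assumptions (the blocked-pattern list and refusal message are hypothetical); real systems use trained policy classifiers and intent analysis, which is precisely why the article calls managing intent an ongoing challenge.

```python
import re

# Illustrative only: a toy output filter a generative-AI vendor might
# layer on top of a model. The blocked-pattern list is invented for
# demonstration; real deployments use trained policy classifiers.
BLOCKED_PATTERNS = [
    r"\bkeylogger\b",
    r"\bransomware\b",
    r"obfuscate (this|the) payload",
]

REFUSAL = "[Blocked: this response may violate the acceptable-use policy.]"

def moderate(response: str) -> str:
    """Replace a response matching a blocked pattern with a refusal."""
    lowered = response.lower()
    if any(re.search(p, lowered) for p in BLOCKED_PATTERNS):
        return REFUSAL
    return response
```

The weakness of this approach mirrors the article's point: pattern lists catch stated intent, not disguised intent, so attackers who rephrase their requests can slip past static controls.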

In summary, generative AI systems are powerful tools, and they have the potential to be used by cyber adversaries for malicious purposes. By raising awareness of the potential risks and implementing appropriate security measures, organizations can mitigate this risk and better protect themselves against such cyberattacks and the breaches and data loss that follow.

As with any new technology, it is important to take a measured and informed approach to generative AI and its implications for one’s cybersecurity posture.


https://www.forbes.com/sites/forbestechcouncil/2023/05/03/are-you-concerned-about-generative-ai-becoming-a-cybersecurity-risk/?sh=76e90aa31d44

