
ChatGPT Hallucinations Could Open You Up to Cyberattacks

Published: June 9, 2023

By Adam Rowe

ChatGPT and other generative AI programs are always confident, but they’re not always right.

A new technique takes advantage of “AI package hallucination” to get ChatGPT to trick developers into downloading malicious code libraries.

Helping developers out with coding suggestions and grunt work is one big benefit of the popular AI solution ChatGPT. But bad actors could turn all those suggestions to their own benefit, researchers believe: They just have to figure out which package names ChatGPT is likely to hallucinate, then publish malicious code under those names.

We’ve only begun exploring the impact of artificial intelligence on the world, but it’s already giving cyberattackers a leg up in the ongoing arms race between security tools and the people trying to get around them.

How the New ChatGPT Coding Scam Works

The trick takes advantage of a quirk common to the large language models that power generative AI like ChatGPT: They love to make stuff up.

ChatGPT is a predictive bot, so it will come up with the answers that it thinks make sense based on available data. However, this means that it often generates a “hallucination” — a response that seems reasonable but is a completely false claim that falls apart when fact-checked.

If a bad actor grills ChatGPT about code libraries (or “packages”) until the AI makes up a fake one, that hacker can then create a real version of the previously non-existent library. Then, the next time ChatGPT mentions that library to a user, that user might try downloading the only available version… the malicious one.

The new scam has been detailed by Vulcan Cyber’s Voyager18 research team, which has published a full explanation in a recent blog post.
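To make the mechanics concrete, here is a minimal sketch in Python, assuming the PyPI registry (the same idea applies to npm and other package ecosystems, and the package name below is a made-up example). A hallucinated name resolves to nothing today, but an attacker who spots the hallucination can register it, after which the very same lookup succeeds:

```python
# Minimal sketch: probe whether a suggested package name exists on PyPI,
# using PyPI's public JSON API. The package name is a made-up example.
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real project on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# A hallucinated name returns 404 today. But once an attacker claims the
# name, this same check starts returning True -- existence alone proves
# nothing about trustworthiness.
print(package_exists_on_pypi("totally-hypothetical-package"))
```

In other words, the fact that a suggested package installs cleanly is exactly what the attacker is counting on.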

How to Stay Safe From AI Package Hallucinations

Once you know about the threat posed by package hallucinations, there’s one simple fix: Double-check everything that ChatGPT tells you before you actually believe it.

That said, in this specific case, the Voyager18 team has more suggestions that could help.
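As an illustration of that kind of vetting (a sketch of the general idea, not Voyager18’s actual checklist), a developer could look at a package’s history on PyPI before installing it: how old the project is and how many releases it has. The thresholds below are illustrative assumptions, not hard rules:

```python
# Hedged sketch: vet a suggested package by its PyPI history before
# installing it. Thresholds are illustrative, not recommendations.
from datetime import datetime, timedelta, timezone

import requests

def vet_pypi_package(name: str, min_age_days: int = 180, min_releases: int = 3) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return False  # no such project -- possibly a hallucinated name

    releases = resp.json().get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        return False  # a claimed name with no uploaded files is suspicious

    age = datetime.now(timezone.utc) - min(upload_times)
    return age > timedelta(days=min_age_days) and len(releases) >= min_releases

# A long-established library should pass; a days-old squat on a
# hallucinated name should not.
print(vet_pypi_package("requests"))
```

Checks like these won’t catch every malicious package, but they raise the bar well above “the name resolves, so install it.”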

ChatGPT is a great tool to help you respond to an error message or to create brand-new code, but developers shouldn’t rely on it for more important coding projects.

Tips for Using ChatGPT

ChatGPT and other generative AI programs are always confident, but they’re not always right. Everyone who doesn’t realize that is at risk of believing a lie, which could open them up to anything from embarrassment to legal trouble to, as the Voyager18 team uncovered, a major coding security breach.

When using ChatGPT, we recommend:

  1. Fact-check your results
  2. Look up the latest ChatGPT scams to avoid
  3. Never give ChatGPT sensitive company or personal data
  4. Know what ChatGPT errors will look like
  5. Don’t assume ChatGPT’s results are copyright free
  6. Don’t use ChatGPT for math

And, above all, don’t let the “intelligence” part of “artificial intelligence” distract you from the other word in that phrase.


https://tech.co/news/chatgpt-package-hallucination-cyberattacks

