Seven Common Cybersecurity Mistakes Made With AI
By Nelson Cicchitto
Your company has started to use artificial intelligence (AI), but are you effectively managing the risks involved? AI is a new growth channel with the potential to boost productivity and improve customer service. However, it also introduces cybersecurity risks that management needs to assess. Start by considering current AI trends to put those risks in context.
Why is AI an emerging cybersecurity threat?
Artificial intelligence is a booming industry right now with large corporations, researchers, and startups all scrambling to make the most of the trend. From a cybersecurity perspective, there are a few reasons to be concerned about AI. Your threat assessment models need to be updated based on the following developments.
Early cybersecurity AI may create a false sense of security.
Most machine-learning methods currently in production require users to provide a training data set. With this data in place, the application can make better predictions. However, end-user judgment is a major factor in determining which data to include. This supervised learning approach can be compromised if hackers discover how the training process works: in effect, attackers can evade machine-learning detection by making malicious code mimic safe code.
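To make the evasion risk concrete, here is a minimal sketch. The "model" is a hypothetical stand-in (a token-ratio scorer, not any real detector), but it shows the general pattern: once an attacker knows which features the classifier learned, they can pad a malicious payload with benign-looking tokens until the score drops below the alert threshold.

```python
# Toy illustration (hypothetical model, not a real product's detector):
# a supervised classifier scores code by the fraction of tokens that match
# a "suspicious" set learned from training data.

SUSPICIOUS = {"exec", "eval", "base64", "socket"}  # features the model learned

def suspicion_score(tokens):
    """Fraction of tokens that match the learned suspicious set."""
    return sum(t in SUSPICIOUS for t in tokens) / len(tokens)

THRESHOLD = 0.25  # alert when more than a quarter of the tokens look malicious

payload = ["eval", "base64", "socket"]           # obviously flagged: score 1.0
padding = ["print", "len", "sum", "range", "sorted",
           "min", "max", "map", "zip"]           # benign tokens as camouflage
evasive = payload + padding                      # same payload, diluted score

assert suspicion_score(payload) > THRESHOLD      # original attack is caught
assert suspicion_score(evasive) <= THRESHOLD     # padded attack slips through
```

The payload has not changed at all; only its statistical appearance has. This is why supervised detection alone can create a false sense of security.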
AI-based cybersecurity creates more work for humans.
Few companies are willing to trust their security to machines. As a result, machine learning in cybersecurity has the effect of creating more work. WIRED magazine summarized this role as follows: “Machine learning’s most common role, then, is additive. It acts as a sentry, rather than a cure-all.” As AI and machine learning tools flag more and more problems for review, human analysts will need to review this data and decide what to do next.
Hackers are starting to use AI for attacks.
Like any technology, AI can be used for defense or attack. Researchers at the Stevens Institute of Technology demonstrated that fact in 2017: after analyzing 43 million user profiles, they used AI to successfully guess 27% of LinkedIn passwords. In the hands of defenders, such a tool could help educate end users about weak passwords. In the hands of attackers, the same tool could be used to compromise security.
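The defensive use case can be sketched in a few lines. This is not the Stevens Institute model; it is a hypothetical password audit that generates guesses from a frequency-ranked wordlist plus common mutations, the same style of guessing an AI model automates at far larger scale.

```python
# Hedged sketch (hypothetical wordlist and mutation rules, for illustration):
# a defender audits user passwords against guesses a simple model would try.

COMMON = ["password", "linkedin", "123456", "qwerty"]  # assumed wordlist

def generate_guesses(wordlist):
    """Expand each common word with a few typical user mutations."""
    for word in wordlist:
        yield word
        yield word.capitalize()
        yield word + "1"
        yield word + "!"

def audit(passwords):
    """Return the subset of passwords a guessing model would crack."""
    guesses = set(generate_guesses(COMMON))
    return {p for p in passwords if p in guesses}

weak = audit(["Linkedin", "x9$kQ!t2", "password!"])
# "Linkedin" and "password!" are flagged; the random password survives
```

Run as an internal audit, this tells users to change guessable passwords before an attacker's model finds them first.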
The Mistakes You Need To Know About
Avoid the following mistakes, and you’re more likely to have success with AI in your organization.
1. You haven’t thought through the explainability challenge.
When you use AI, can you explain how it operates and makes recommendations? If not, you may be accepting (or rejecting!) recommendations without being able to assess them. This challenge can be mitigated by reverse-engineering the recommendations made by AI.
2. You use vendor-provided AI without understanding their models.
Some companies decide to buy or license AI from others rather than building the technology in house. As with any strategic decision, there’s a downside to this approach. You can’t blindly trust a vendor’s assurances that its AI will be beneficial. You need to ask tough questions about how the systems protect your data and which systems the AI tools can access. Overcome this challenge by asking your vendors to explain their assumptions about data and machine learning.
3. You don’t test AI security independently.
When you use an AI or machine-learning tool, you need to entrust a significant amount of data to it. Before you trust the system, test it from a cybersecurity perspective. For example, consider whether the system can be compromised by SQL injection or other hacking techniques. If a hacker can compromise the algorithm or the data in an AI system, the quality of your company’s decision making will suffer.
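As one concrete test, here is a hedged sketch of checking whether a query layer behind an AI tool is injectable. The table and lookup functions are hypothetical, but the vulnerable-versus-parameterized contrast is the standard one.

```python
# Sketch of one independent security test: can the data layer an AI tool
# relies on be compromised by SQL injection? (Hypothetical schema.)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (label TEXT, payload TEXT)")
conn.execute("INSERT INTO samples VALUES ('benign', 'hello')")

def lookup_unsafe(label):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    return conn.execute(
        f"SELECT payload FROM samples WHERE label = '{label}'").fetchall()

def lookup_safe(label):
    # Fixed: parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT payload FROM samples WHERE label = ?", (label,)).fetchall()

attack = "x' OR '1'='1"
assert lookup_unsafe(attack) != []   # injection dumps rows it should not see
assert lookup_safe(attack) == []     # parameterized query returns nothing
```

An AI vendor's product should pass tests like this one; if an attacker can alter or read the training data this way, every downstream recommendation is suspect.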
4. Your organization lacks AI cybersecurity skills.
To carry out AI cybersecurity tests and evaluations, you need skilled staff. Unfortunately, there are relatively few cyber professionals who are competent in security and AI. Fortunately, this mistake can be overcome with a talent development program. Offer your cybersecurity professionals the opportunity to earn certificates, attend conferences and use other resources to increase their AI knowledge.
5. You avoid using AI completely for security reasons.
Based on the previous mistakes, you might assume that avoiding AI and machine learning completely is a smart move. That might’ve been an option a decade ago, but AI and machine learning are now part of every tool you use at work. Attempting to minimize AI risk by ignoring this technology trend will only expose your organization to greater risk. It’s better to seek proactive solutions that leverage AI. For instance, you can use security chatbots to make security more convenient for your staff.
6. You expect too much transformation from AI.
Going into an AI implementation with unreasonable expectations will cause security and productivity problems. Resist the urge to apply AI to every business problem in the organization. Such a broad implementation would be very difficult to monitor from a security point of view. Instead, take the low-risk approach: Apply AI to one area at a time, such as automating routine security administration tasks, and then build from there.
7. You’re holding back real data from your AI solution.
Most developers and technologists like to reduce risk by setting up test environments. It’s a sound discipline and well worth using. However, when it comes to AI, this approach has its limits. To find out whether your AI system is truly secure, you need to feed it real data: customer information, financial data or something else. If all this information is held back, you’ll never be able to assess the security risks or productivity benefits of embracing AI.
Adopt AI With An Eyes-Wide-Open Perspective
There are certainly dangers and risks associated with using AI in your company. However, these risks can be monitored and managed through training, proactive management oversight and avoidance of these seven mistakes.