Dispelling myths around AI within cyber security
By Peter Margaris
Peter Margaris, head of product marketing at Skybox Security, looks to dispel myths around artificial intelligence (AI) within cyber security.

The perception that AI can serve as a “silver bullet” against a developing threat landscape is rooted in the ongoing quest for technologies that can automate threat detection and response without the need for human intervention.
In a report on AI and cyber security last year, Capgemini reported that 69% of the enterprise executives it surveyed felt AI is essential for responding to cyber threats. But despite its promise, AI within cyber security should be approached with a discerning eye.
Often, when firms discuss the promise of AI and their current capabilities, the reality is that they are practising machine learning. Generally viewed as a subset of AI, machine learning algorithms build a mathematical model from sample data to detect behavioural patterns that identify variants of attacks, and to make predictions or decisions without being explicitly programmed to do so. In cyber security, machine learning techniques are most applicable in detect-and-respond technologies and are utilised in SIEM, EDR, XDR, and sandboxing solutions. While valuable in those use cases, it becomes problematic when this machine learning capability is touted as artificial intelligence. AI goes much further, encompassing systems that perceive their environment and take actions that maximise their chances of achieving their goals, mimicking “cognitive” functions that humans associate with the human mind, such as problem solving.
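To make the distinction concrete, here is a minimal, hypothetical sketch contrasting a hard-coded rule with the simplest possible model fitted from sample data. The scenario (login failures per hour), the baseline figures and the thresholds are illustrative assumptions, not taken from any real product.

```python
import statistics

# Hypothetical login-failure counts per hour observed on a healthy network
# (illustrative sample data, not from any real deployment).
baseline = [2, 3, 1, 4, 2, 3, 5, 2, 3, 4]

def rule_based_alert(failures_per_hour):
    """Rule-based detection: the threshold is explicitly programmed by a human."""
    return failures_per_hour > 10  # fixed rule, must be re-tuned by hand

def fit_model(samples):
    """Machine learning in its simplest form: the 'model' (mean and standard
    deviation) is fitted from sample data rather than hard-coded."""
    return statistics.mean(samples), statistics.stdev(samples)

def learned_alert(failures_per_hour, model, k=3.0):
    """Flag values that are statistical outliers relative to the fitted model."""
    mean, stdev = model
    return failures_per_hour > mean + k * stdev

model = fit_model(baseline)
print(rule_based_alert(9))      # the fixed rule misses this spike
print(learned_alert(9, model))  # the fitted model flags it as anomalous
```

The point is not the arithmetic but the division of labour: in the first function a human encodes the decision, while in the second the decision boundary is derived from data, which is what “without being explicitly programmed” means in practice.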
It’s important to keep these differences in mind when evaluating what is truly possible with AI, the technology’s current limitations, and where to focus security programme strategy and resources.
The limitations of AI
There is a tendency to believe that artificial intelligence can solve all problems associated with enterprise security programmes. While there is a lot to gain from AI, there is a danger that companies are overly optimistic about exactly what the technology can deliver. When leveraged for the right use cases, AI has the power to move security teams away from the never-ending cycle of ‘detect – respond – remediate – reprogramme’, towards an approach to security that is more proactive, effective, and less like a game of ‘whack-a-mole’. But if companies invest in AI with the belief that it can fill the resource gap left unfilled by the ongoing cyber security skills crisis, then they are sorely mistaken.
The level of human intervention required for AI tools is significant. AI isn’t able to stop zero-days or other advanced threats, it’s known to generate false positives, and it isn’t yet able to learn rapidly enough to keep pace with the break-neck speed at which malware evolves. If a technology promises machine learning capabilities, it is wise to investigate whether the solution in fact relies on rule-based programming rather than genuine machine learning algorithms.
If AI is deployed without a prescriptive process and resourcing plan, it’s plausible that threats will slip through the cracks undetected. And the resourcing plan will require a lot of heavy lifting to keep the technology running properly – these are tools that will likely end up consuming more resource than you’re willing and able to spare. When it is deployed, it is also feasible that an AI cyber security tool will be configured inaccurately. In some cases, this could result in algorithms failing to spot malicious activity that could end up disrupting the entire company. If the AI misses a certain type of cyber attack because certain parameters haven’t been properly accounted for, there will be inevitable problems further down the line.
Automation generates efficiencies, AI creates resource drain
AI shouldn’t be used to paper over the cracks. It will struggle to solve existing cyber security issues if the organisation deploying it hasn’t first established solid foundational security. At a time when budgets are under great scrutiny as companies fight to ride out the recession, it is technology that can squarely be considered a ‘luxury’. Many of the problems that AI claims to fix persist, and experienced security analysts who can make impactful decisions with proper context and insight into their attack surface cannot yet be displaced by AI. Organisations are still challenged by a multitude of complexities. For example, there is an influx of new vulnerabilities, with another 20,000 known flaws predicted to be disclosed by the year’s end. As vulnerabilities continue to accumulate, future exploits are inevitable but, in many cases, avoidable with the correct protocols in place. To address this, firms need to introduce more effective remediation strategies and gain more insight into their fragmented environments so that they can build security programmes that give them a competitive edge.
While AI can be time- and resource-intensive, there are more efficient means of tackling these pressing issues. Automating processes – change management, for example – will lighten the load on already-stretched teams. Introducing automation will free up resource and enable the development of a more considered cyber security function. Context-aware automation tools can be used in a host of useful ways. They can clean up and optimise firewalls, spot policy violations, assess vulnerabilities without a scan, match vulnerabilities to threats, simulate end-to-end access and attacks, proactively assess rule changes, and more. With the right tools in place, all of these processes can be automated and organisations can develop more efficient and informed ways of working. And while the great leaps forward promised by AI may be enticing, it is advancements like these, offered by analytics-driven automation, that will deliver the greatest tangible benefits.
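One of the capabilities above – spotting policy violations – can be sketched in a few lines. The rule representation and the policy (no ‘any’-to-‘any’ allow rules, no access permitted on ports the organisation has decided to block) are simplified illustrative assumptions, not any vendor’s schema; real tools parse actual firewall configurations.

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    name: str
    source: str       # network zone, or 'any'
    destination: str  # network zone, or 'any'
    port: int
    action: str       # 'allow' or 'deny'

# Ports the (assumed) organisational policy forbids, e.g. telnet and SMB.
BLOCKED_PORTS = {23, 445}

def find_violations(rules):
    """Flag allow-rules that violate the assumed security policy."""
    violations = []
    for rule in rules:
        if rule.action != "allow":
            continue  # deny rules cannot grant unwanted access
        if rule.source == "any" and rule.destination == "any":
            violations.append((rule.name, "overly permissive any-to-any rule"))
        if rule.port in BLOCKED_PORTS:
            violations.append((rule.name, f"allows blocked port {rule.port}"))
    return violations

rules = [
    FirewallRule("r1", "dmz", "internal", 443, "allow"),
    FirewallRule("r2", "any", "any", 80, "allow"),
    FirewallRule("r3", "internal", "dmz", 23, "allow"),
]
for name, reason in find_violations(rules):
    print(name, reason)  # r2 and r3 are flagged; r1 passes
```

Because checks like this are deterministic and run against the organisation’s own policy, they need no training data and produce no statistical false positives – which is why the article argues automation of this kind delivers value today.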
Automation today, artificial intelligence tomorrow
For now, don’t be misled by the hype around AI. As it stands, the solutions currently on the market don’t live up to the concept of the technology. It is, however, fully plausible that organisations will improve their foundational security, and AI technology will no doubt advance to a level where it can live up to its grand promises. This isn’t technology that should be outright dismissed – it will play a significant role in the future. However, now is not the time for risky investments, both in terms of financial outlay and time. In the current environment, businesses need to look at how they can bolster their security posture in a real, grounded and valuable way. Context-aware automation is an answer for today, when resources are tight and complexity abounds.