AI and machine learning in cybersecurity: Trends to watch
By Michael Cobb
IT security teams today struggle to make sense of the enormous amounts of data modern IT infrastructures generate and consume, while simultaneously prioritizing and responding to alerts. This enormous responsibility is one of the reasons why detection and remediation times are so poor. In fact, a malicious attack has an average lifecycle of 314 days from breach to containment, according to the “2019 Cost of a Data Breach Report”.
Manual and semiautomated checks and interventions cannot keep up with a constantly evolving threat landscape. And, with the average cost of a data breach estimated at $150 per record lost, according to the IBM study, a strong case can be made for automating many security tasks. The case grows stronger considering the same study found that organizations without security automation experienced breach costs 95% higher than those with fully deployed automation.
Though AI and machine learning have been used in security products for some time, they are becoming increasingly important in modern incident response orchestration. Automation is critical for zero-day threat detection in particular as it can bring cyberattack response times down to milliseconds. Contrast this with human response times, which may take hours, days or even months. Critically, AI also helps security teams make more informed decisions.
In a Capgemini Research Institute study, 69% of senior executive respondents said they would be unable to respond to a cyberattack without AI. The same study found two-thirds of organizations plan to employ AI by 2020. Interestingly, many organizations categorized AI as a means of increasing revenues or reducing costs. This cybersecurity automation trend includes notable large companies, such as Amazon, Microsoft and Google, which are each incorporating AI into their internet-based services.
Automating identity and access management
As more organizations implement zero-trust security frameworks, identity and access management (IAM) will become more important than ever. Considering the new security perimeter, cybersecurity automation trends will likely include further advances in the use of AI to improve the effectiveness of IAM. For example, strong, unique passwords are impractical for everyday use, which often leaves a weak password as the only security measure standing between a user's data and a cybercriminal.
Another growing trend is the combination of supervised algorithms and unsupervised learning. This method proves effective in identifying anomalous behavior and triggering reduced or restricted access. It’s important to understand the differences between the two types:
1. Supervised algorithms extract and learn from patterns in existing data to find relationships not discernible with traditional rule-based approaches. Identifying relationships enables the evaluation and risk scoring of new files that the algorithm has not encountered before, such as zero-day threats.
2. Unsupervised learning finds anomalies, interrelationships and links between unlabeled data sets or emerging factors and variables in order to extract hidden patterns. This can lead to threat predictions — an almost impossible job for human analysts.
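The distinction above can be sketched in a few lines of Python. This is a deliberately simplified toy on a single numeric feature (say, bytes transferred per session): the supervised scorer learns from labeled benign/malicious examples, while the unsupervised check flags statistical outliers in unlabeled history. Real products use far richer features and models; the functions and data here are illustrative assumptions.

```python
def supervised_risk_score(sample, labeled_data):
    """Score a new sample by its distance to the mean of known-benign
    vs. known-malicious training examples (closer to malicious -> higher)."""
    benign = [x for x, label in labeled_data if label == "benign"]
    malicious = [x for x, label in labeled_data if label == "malicious"]
    benign_mean = sum(benign) / len(benign)
    malicious_mean = sum(malicious) / len(malicious)
    d_benign = abs(sample - benign_mean)
    d_malicious = abs(sample - malicious_mean)
    # Risk score in [0, 1]: 1.0 means the sample sits on the malicious centroid.
    return d_benign / (d_benign + d_malicious)

def unsupervised_is_anomaly(sample, unlabeled_data, threshold=3.0):
    """Flag a sample whose z-score against unlabeled historical data
    exceeds the threshold -- no labels required."""
    n = len(unlabeled_data)
    mean = sum(unlabeled_data) / n
    std = (sum((x - mean) ** 2 for x in unlabeled_data) / n) ** 0.5
    return abs(sample - mean) > threshold * std

labeled = [(100, "benign"), (120, "benign"), (5000, "malicious"), (5200, "malicious")]
history = [100, 110, 95, 120, 105, 98, 115]

print(supervised_risk_score(4800, labeled))   # high risk score (~0.94)
print(unsupervised_is_anomaly(5000, history)) # True: far outside normal history
print(unsupervised_is_anomaly(108, history))  # False: looks like normal traffic
```

Note how the unsupervised check never needed examples of an attack, which is why that approach matters for previously unseen threats.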
AI and machine learning in cybersecurity will play an even larger role in protecting the entire customer experience — from account creation and login to service interaction. They will continue to improve the process of assigning risk scores to login attempts. They will also adapt responses to the nature and context of suspicious events — as opposed to locking out a user or terminating a session. Applications overall will be more efficient at recognizing genuine user and system activities, while containing and mitigating real threats.
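The idea of adapting the response to the nature and context of a suspicious login, rather than a binary allow/lock-out, can be sketched as follows. The signals, weights and thresholds here are illustrative assumptions, not any real product's model.

```python
def login_risk_score(event):
    """Combine simple contextual signals into a 0-1 risk score."""
    score = 0.0
    if event.get("new_device"):
        score += 0.3
    if event.get("unusual_location"):
        score += 0.4
    if event.get("failed_attempts", 0) > 3:
        score += 0.3
    return min(score, 1.0)

def adaptive_response(score):
    """Match the response to the risk level rather than terminating
    every suspicious session outright."""
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up-mfa"      # challenge the user, don't block them
    return "restrict-session"     # limit access pending review

event = {"new_device": True}
print(adaptive_response(login_risk_score(event)))  # step-up-mfa
```

A genuine user on a new laptop gets an extra MFA prompt instead of a lockout, while a login combining several risk signals is restricted, which is the graduated behavior described above.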
AI, machine learning and the edge
In the same way that edge computing has moved compute, storage and network connectivity resources closer to remote devices, AI and machine learning models will be placed on endpoints themselves. These models will feature collaborative information-sharing capabilities that enable dramatically faster identification and termination of threats in real time. For future AI and machine learning cybersecurity technology to be successful, the quality of diverse data sets that the products work with will need to be improved. Semantically enriched data provides more options for meaningful extraction and enables algorithms to produce more precise predictions.
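A minimal sketch of that collaborative information sharing: one endpoint's local model flags a threat indicator and broadcasts it, so peers can block it without having to re-detect it themselves. The class names and flow are hypothetical simplifications of how such sharing might work.

```python
class Endpoint:
    """Toy endpoint with a local blocklist and a shared peer registry."""

    def __init__(self, name, peers):
        self.name = name
        self.peers = peers          # shared list standing in for a mesh/broker
        self.blocklist = set()
        peers.append(self)

    def local_detect(self, indicator):
        """Stand-in for an on-device model flagging an indicator,
        then sharing it with every peer (self included)."""
        for peer in self.peers:
            peer.block(indicator)

    def block(self, indicator):
        self.blocklist.add(indicator)

    def allows(self, indicator):
        return indicator not in self.blocklist

peers = []
a = Endpoint("a", peers)
b = Endpoint("b", peers)
a.local_detect("evil.example.com")
print(b.allows("evil.example.com"))  # False: b blocks it without ever detecting it
```

The point of the sketch is latency: peer b's protection arrives at broadcast speed, not at the speed of its own detection pipeline.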
As with any technology, AI and machine learning can be used for good and bad. Cybercriminals already harness the power of AI to analyze network defenses and simulate behavioral patterns to bypass security controls. The battle to protect AI-powered systems from being compromised will come center stage. One significant threat to AI technology is training data poisoning — when bad actors gain access to a model's training data and inject incorrect examples, resulting in faulty decisions. According to Gartner, 30% of all AI cyberattacks will use training data poisoning, AI model theft or adversarial samples to attack AI-powered systems.
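Training data poisoning can be illustrated with a toy experiment: flipping labels on a few training examples is enough to change a simple nearest-centroid classifier's decision. The data and model below are deliberately minimal assumptions; real attacks target much larger pipelines.

```python
def train_centroids(data):
    """Compute the per-class mean ('centroid') of one numeric feature."""
    sums, counts = {}, {}
    for value, label in data:
        sums[label] = sums.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(sample, centroids):
    """Assign the class whose centroid is nearest to the sample."""
    return min(centroids, key=lambda label: abs(sample - centroids[label]))

clean = [(10, "benign"), (20, "benign"), (90, "malicious"), (100, "malicious")]
# An attacker with access to the training set relabels malicious examples:
poisoned = [(10, "benign"), (20, "benign"), (90, "benign"), (100, "malicious")]

print(classify(65, train_centroids(clean)))     # malicious
print(classify(65, train_centroids(poisoned)))  # benign -- the attack worked
```

Two flipped labels pulled the "benign" centroid toward attack traffic, so the same suspicious sample is now waved through, which is exactly the faulty-decision outcome described above.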
Fortunately, AI and machine learning enable organizations to protect their users’ accounts with more than just a password. For this reason, AI- and machine learning-powered systems look poised to be a central aspect of future IT infrastructure investments. Note that they will require specialized hardware and software, as well as dedicated teams with the skills to manage AI and machine learning models and understand the accuracy, attributes and model statistics used to reach conclusions.
To spend budgets wisely, organizations will need to understand the different options available in order to choose the appropriate technologies for their particular environments. Even with AI and machine learning capabilities to automate security operations and increase detection rates, organizations still need human teams to be able to understand the scope, severity and veracity of threats found and to prepare an effective response.