
Cybersecurity and Artificial Intelligence
By Marlon Vega
Cybersecurity and artificial intelligence together form one of the most relevant topics today. With the increasing reliance on technology in daily life, businesses and organizations are exposed to a greater risk of cyberattacks. Artificial intelligence has proven to be an effective tool for improving cybersecurity risk management.
Cybersecurity and artificial intelligence are closely related.
First, artificial intelligence (AI) techniques can be used to improve the cybersecurity and resilience of products, services, systems, and, therefore, of companies and society.
A clear case where it can help is in detecting and preventing cyberattacks before they occur. Cyberattacks can be very sophisticated and difficult to detect. For example, a phishing attack may appear to be a legitimate email from a trusted company; however, by identifying patterns in emails, an AI system can flag possible phishing.
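As a minimal sketch of this idea, the hypothetical scorer below uses hard-coded keyword and link heuristics; a real system would learn such features from labeled email data rather than enumerate them by hand.

```python
import re

# Hypothetical indicators often seen in phishing emails. In a trained
# model these features would be learned, not hard-coded.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
]

def phishing_score(subject: str, body: str, sender: str) -> float:
    """Return a 0..1 score; higher means more phishing-like."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Urgency / credential-harvesting language.
    score += 0.3 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links pointing outside the sender's own domain.
    sender_domain = sender.split("@")[-1]
    for domain in re.findall(r"https?://([\w.-]+)", body):
        if not domain.endswith(sender_domain):
            score += 0.3
    return min(score, 1.0)

legit = phishing_score("Invoice",
                       "See https://billing.acme.com/inv1",
                       "billing@acme.com")
phish = phishing_score("Urgent action required",
                       "Verify your account at https://acme-login.evil.example",
                       "support@acme.com")
```

The same structure (extract features, combine them into a score, threshold the score) is what statistical phishing filters do, only with thousands of learned features instead of three fixed rules.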
Second, AI is starting to be used by cybercriminals and other types of cyber attackers to compromise cybersecurity and perpetrate different types of attacks and generate fake news.
Finally, AI systems are themselves susceptible to cyberattacks; therefore, secure, privacy-preserving AI systems that we can trust must be developed. Given the interaction between the two fields, cybersecurity, artificial intelligence, and R&D strategies must be coordinated to create techniques, methods, and tools that facilitate the design, development, validation, and deployment of AI-based systems with a multi-criteria approach that considers the cybersecurity of the data, the model, and the result.
Artificial intelligence can be applied in many areas. Below, we break down the possible good and bad uses of AI in the field of cybersecurity, and the dangers its use can pose if the AI has not been designed securely.
Artificial intelligence supporting cybersecurity
AI is being used to develop self-adaptive systems that allow automated responses to cyber threats. Indeed, AI can be used at all stages of intelligent end-to-end security: incident identification, protection, detection, response, and recovery. In this sense, cybersecurity can be considered one more area where AI can be applied, alongside energy, transport, industry, or health. In fact, this is not a new field of application for AI; it has been used for some time to develop solutions that detect and stop complex, sophisticated cyber threats while preventing data leaks.
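The detection stage often starts with statistical anomaly detection. The sketch below flags data points that sit far from the mean, a toy stand-in for the far richer models real detection tools use; the login counts and the 2-standard-deviation threshold are illustrative assumptions.

```python
import statistics

def detect_anomalies(values, threshold=2.0):
    """Return indices of points more than `threshold` standard
    deviations from the mean (a simple z-score detector)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Illustrative hourly failed-login counts: the burst at index 5
# could indicate a brute-force attempt.
logins = [3, 4, 2, 5, 3, 250, 4, 3]
anomalies = detect_anomalies(logins)
```

Production systems replace the z-score with learned baselines per user, per host, and per time of day, but the principle is the same: model "normal" and alert on deviations.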
The use of AI by cyber attackers
Today, AI is used in marketing applications to understand user behavior patterns and design commercial campaigns, with AI software available to everyone. It would therefore be very naive to think that cybercriminals are not using it as well: in the simplest case, to get to know their victims better and identify the best moment to carry out a criminal action, improving their chances of success. How AI is used depends on the profile of the attackers, ranging from the relatively harmless, associated with cyber-mischief, to the most dangerous, such as those related to cyber-terrorism, cyber-espionage, or cyber-warfare.
The same variety can be found in the level of sophistication and complexity of cyberattacks, which varies greatly from one to another. Behind the most dangerous and sophisticated cyberattacks likely to use AI may be highly specialized groups financed by certain states, whose attacks may target critical infrastructure in another country or generate disinformation campaigns.
If we analyze potential attackers from another perspective, that of how they operate, they can use AI both to exploit known vulnerabilities and to find or create unknown ones. Beyond learning the behavior patterns of future victims, attackers can use AI to quickly crack passwords and CAPTCHAs, build malware that evades detection, hide from discovery, adapt quickly to countermeasures, automatically harvest information using natural language processing (NLP), and impersonate people by generating fake audio, video, and text. Attackers are also using generative adversarial networks (GANs) to mimic normal communications traffic patterns, deflecting attention from an attack while they quickly find and extract sensitive data.
Are AI systems vulnerable to cyberattacks?
Attackers can use AI systems not only to make their own decisions but also to manipulate the decisions made by others. In a simplified way, an AI system is still a software system in which data, models, and processing algorithms are used. In this sense, an AI system that has not been developed with cybersecurity by design can be vulnerable to cyberattacks that target the AI data, model, or algorithm and cause undesired results or wrong decisions in the system. Different companies have already suffered attacks on commercial AI systems.
According to Gartner, data poisoning will be a major cybersecurity threat in the coming years. Data poisoning involves tampering with machine learning training data to produce undesirable outcomes. An attacker can alter the training data in various ways, such as injecting false data, modifying existing data, or manipulating the weighting of specific data points. These alterations can cause the machine learning model to produce inaccurate results, leading to incorrect decisions and potential financial losses.
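A minimal sketch of the injection case: the toy nearest-centroid classifier below labels requests "benign" or "attack" by distance to each class mean, and a handful of mislabeled points injected by the attacker drags the benign centroid far enough to flip a decision. The data and labels are invented for illustration.

```python
from statistics import fmean

def centroid_classify(train, point):
    """1-D nearest-centroid classifier: assign `point` to the class
    whose mean training value is closest."""
    by_label = {}
    for x, label in train:
        by_label.setdefault(label, []).append(x)
    centroids = {label: fmean(xs) for label, xs in by_label.items()}
    return min(centroids, key=lambda label: abs(centroids[label] - point))

# Illustrative training set: request sizes labeled benign (small)
# or attack (large).
clean = [(1.0, "benign"), (2.0, "benign"), (9.0, "attack"), (10.0, "attack")]

# Poisoning: the attacker injects large requests mislabeled "benign",
# dragging the benign centroid upward so later attacks evade detection.
poisoned = clean + [(8.0, "benign"), (9.0, "benign"),
                    (10.0, "benign"), (11.0, "benign")]
```

Trained on `clean`, a request of size 8.0 is classified as an attack; trained on `poisoned`, the same request is classified as benign. Real attacks work the same way at much larger scale, which is why training pipelines need integrity controls on their data sources.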
All identified attack tactics have been collected in a knowledge base of threats to artificial intelligence in order to support the collection, structuring, and reuse of knowledge about threats to AI systems and how to act to minimize the impact of these cyber threats. Along these lines, technology organizations, along with other private entities, launched the Adversarial ML Threat Matrix, an open framework designed to help security analysts detect, respond to, and resolve threats that may occur against machine learning systems.
Addressing cybersecurity from artificial intelligence
If we want to minimize the additional effort that will otherwise be needed to fix the problems created by developing AI systems without taking cybersecurity into account, we must act throughout the AI development cycle and across the supply chain to create safe and fair AI systems.
Conclusion
Artificial intelligence has great potential to improve cybersecurity, but it also has certain drawbacks. On the plus side, it can be used to identify patterns and anomalies in large data sets, allowing for faster and more accurate detection of cyberattacks. It can also be used to automate the response to attacks and to better protect critical data and systems.
On the negative side, AI-based defenses can themselves be deceived. For example, the machine learning algorithms used to train AI systems can be fooled by cunning attackers who change their behavior to avoid detection. There are also concerns that AI systems could be used to carry out advanced cyberattacks, such as social engineering, by generating personalized, authentic-sounding messages to trick users.
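Evasion can be startlingly simple. In the hypothetical sketch below, a rate-based detector flags any source exceeding a fixed number of failed logins per window, and an attacker who merely slows down stays invisible; the limit and event data are invented for illustration.

```python
def flag_sources(events, limit=10):
    """Flag any source appearing more than `limit` times in the window.
    A stand-in for a simple threshold-based detection rule."""
    counts = {}
    for source in events:
        counts[source] = counts.get(source, 0) + 1
    return {s for s, c in counts.items() if c > limit}

# Naive brute force: 50 failed logins in one window is flagged.
noisy = ["attacker"] * 50 + ["user"] * 3

# Evasive attacker: 9 attempts per window stays under the threshold.
stealthy = ["attacker"] * 9 + ["user"] * 3
```

Fixed thresholds are exactly what adaptive attackers probe for; defenses that learn behavior over longer horizons are harder, though not impossible, to game.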
From all this, the conclusion we can draw about the eruption of artificial intelligence into the world of cybersecurity is that we should embrace it as another tool. If we feed it a large amount of data from our organization and work on the appropriate use cases, it will be a great help in automating the more mechanical or tedious processes, allowing professionals to focus their efforts on contributing experience and knowledge where they add the most value to the organization.
https://cybersecurity.cioreview.com/cxoinsight/cybersecurity-and-artificial-intelligence-nid-37732-cid-145.html