As artificial intelligence capabilities continue to grow at a rapid pace, AI technologies are becoming ubiquitous both for protecting against cyberattacks and as an instrument for launching them. Last year, Gartner predicted that almost every new software product would implement AI by 2020. The advancements in AI and its ability to make automated decisions about cyber threats are revolutionizing the cybersecurity landscape as we know it, from both a defensive and an offensive perspective.
AI in Cyber Defense
As a subdivision of AI, machine learning is already easing the burden of threat detection for many cyber defense teams. By analyzing network traffic and drawing on the vast amounts of security data that businesses collect, machine learning models can establish a baseline of normal activity within a system and flag deviations from it. Anomalies are then fed back to security teams, which make the final decision on how to react.
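The baseline-then-flag pattern described above can be sketched in a few lines. This is a deliberately minimal illustration, not a production detector: the metric (requests per minute), the sample data, and the 3-sigma threshold are all hypothetical, and real systems learn far richer baselines.

```python
import statistics

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from historical data."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical history: requests per minute observed for one host
history = [98, 102, 101, 97, 103, 99, 100, 104, 96, 100]
baseline = build_baseline(history)

print(is_anomalous(101, baseline))  # typical traffic -> False
print(is_anomalous(500, baseline))  # sudden spike -> True, escalate to the team
```

Note that, as in the article, the code only flags the anomaly; the decision on how to react stays with the human analysts.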
Machine learning can also classify malicious activity at different layers. For example, at the network layer, it can be applied to intrusion detection systems to categorize classes of attacks such as spoofing, denial of service, and data modification. Machine learning can also be applied at the web application layer and at endpoints to pinpoint malware, spyware, and ransomware.
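A toy version of this kind of attack-class categorization is a nearest-centroid classifier over flow features. Everything here is hypothetical for illustration: the feature set (packets per second, average packet size, distinct source IPs), the labeled examples, and the class labels; a real intrusion detection system would train on large labeled datasets with far more features.

```python
# Hypothetical labeled flows: (packets_per_sec, avg_packet_size, distinct_src_ips)
labeled_flows = {
    "normal":   [(50, 800, 1), (60, 750, 1)],
    "dos":      [(9000, 60, 1), (11000, 64, 1)],
    "spoofing": [(70, 700, 40), (80, 650, 55)],
}

def centroid(points):
    """Average each feature across a class's labeled examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {label: centroid(pts) for label, pts in labeled_flows.items()}

def classify(flow):
    """Assign a flow to the attack class with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(flow, centroids[label]))

print(classify((10500, 62, 1)))  # high packet rate, tiny packets -> "dos"
print(classify((75, 680, 50)))   # many source IPs -> "spoofing"
```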
AI and machine learning are here to stay as key components in a security team's toolbox, particularly as attacks at every level become more frequent and more targeted.
AI and Cybercriminals
Even though implementing machine learning technologies is an asset for defense teams, hackers are armed with the very same ammunition and capabilities, creating a seemingly never-ending arms race.
At the beginning of 2018, "The Malicious Use of Artificial Intelligence," a report co-authored by the Electronic Frontier Foundation, warned that AI can be exploited by hackers for malicious purposes, including the ability to target entire states and alter society as we know it. The authors of the report contend that globally we are at "a critical moment in the co-evolution of AI and cybersecurity and should proactively prepare for the next wave of attacks." They point to the alleged campaigns by Russian actors to manipulate social media in a highly targeted manner as a current example of this threat.
It’s no surprise that cyber experts are concerned. After all, for hackers, AI presents the ideal tool to enable scale and efficiency. Just as machine learning can be used to monitor network traffic and analyze data for cyber defense, it can also be used to make automated decisions about whom, what, and when to attack. Hackers could use AI to alter an organization’s data, rather than steal it outright, causing serious damage to a brand’s reputation, profits, and share price. In fact, cybercriminals can already use AI to craft personalized phishing attacks by collecting information on targets from social media and other publicly available sources.
Guarding Against the “Weaponization” of AI
To protect against AI-launched attacks, security teams should be mindful of three key steps to cement a strong defense:
Step 1: Understand what AI is protecting.
Identify the specific attacks you are protecting against and the AI or machine learning technologies you have in place to guard against them. Once teams lay this out clearly, they can implement appropriate solutions for patch management and threat vulnerability management to ensure that important data is encrypted and there is sufficient visibility into the whole environment. It is vital to retain the option to rapidly change course on defense, because the target is always moving.
Step 2: Have clearly defined processes in place.
Even organizations with the best technology in the world are only as effective as the processes built around it. The key here is to make sure both security teams and the wider organization understand the procedures that are in place. It is the responsibility of security teams to educate employees on cybersecurity best practices.
Step 3: Know exactly what is normal for the security environment.
Having context around attacks is crucial, but this is often where companies fail. A clear understanding of assets and how they communicate allows organizations to correctly isolate events that aren’t normal and investigate them. Ironically, machine learning is an extremely effective tool for providing this context. To safeguard against the weaponization of AI, organizations must build a robust architecture on which the technology operates and be mindful that the right internal education is key to staying a step ahead of attackers.
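The idea of knowing which asset-to-asset conversations are normal can be sketched as follows. This is a simplified stand-in for the statistical models a real deployment would use: the hostnames and observed flows are hypothetical, and a flat set of known-good pairs replaces a learned communication baseline.

```python
# Hypothetical flow log: (source asset, destination asset) pairs seen in normal operation
observed_flows = [
    ("web-01", "db-01"), ("web-01", "db-01"), ("web-02", "db-01"),
    ("app-01", "cache-01"), ("web-01", "db-01"),
]

# "Training" phase: record which assets normally talk to each other
known_pairs = set(observed_flows)

def needs_investigation(src, dst):
    """A conversation between assets that have never talked before is abnormal."""
    return (src, dst) not in known_pairs

print(needs_investigation("web-01", "db-01"))    # established relationship -> False
print(needs_investigation("db-01", "attacker"))  # never seen before -> True
```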
Rodney Joffe is a sought-after cybersecurity expert who, among other notable accomplishments, led the Conficker Working Group in protecting the world from the Conficker worm. He has provided guidance and knowledge to organizations from the United States government to the …