Security researchers at IBM Research have developed a “highly targeted and evasive” AI-powered attack tool, dubbed DeepLocker, that conceals its malicious intent until it has infected the specific target. The team will present it today.
“IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware,” reads a blog post published by the experts.
“This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.”
AI-powered malware represents a privileged option in highly targeted attacks, like the ones carried out by nation-state actors.
The malicious code could be concealed in benign-looking applications and select its target based on various indicators, such as voice recognition, facial recognition, geolocation, and other system-level features.
“What is unique about DeepLocker is that the use of AI makes the “trigger conditions” to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.”
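The idea behind the DNN-based trigger can be sketched as follows. Instead of storing the unlock key anywhere in the binary, the malware derives it from the model's output for the intended target, so analysts who inspect the code find only an opaque neural network, never the key itself. This is a minimal illustrative sketch, not IBM's actual implementation; the embedding values and the `derive_key` helper are hypothetical, and quantization stands in for the error-tolerant key derivation a real attack would need.

```python
import hashlib

def derive_key(embedding, precision=2):
    """Derive a symmetric key from a model's output embedding.

    The key is never stored; it only comes into existence when the
    model emits the expected embedding for the intended target.
    Quantizing the values makes the key stable under small
    numerical noise between inference runs.
    """
    quantized = ",".join(f"{v:.{precision}f}" for v in embedding)
    return hashlib.sha256(quantized.encode()).hexdigest()

# Hypothetical embedding the model would emit only for the target's face.
target_embedding = [0.12, 0.87, 0.33, 0.55]
unlock_key = derive_key(target_embedding)

# Any other face yields a different embedding, hence a useless key.
other_embedding = [0.45, 0.10, 0.72, 0.05]
assert derive_key(other_embedding) != unlock_key
```

Because the key is a one-way function of the model's output, reverse engineering the trigger condition amounts to inverting the DNN, which is what makes the approach so hard to analyze.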
The researchers shared a proof of concept in which they hid the WannaCry ransomware in a video conferencing app and kept it stealthy until the victim was identified through facial recognition. Experts pointed out that the target can be identified by matching their face with publicly available photos.
“To demonstrate the implications of DeepLocker’s capabilities, we designed a proof of concept in which we camouflage a well-known ransomware (WannaCry) in a benign video conferencing application so that it remains undetected by malware analysis tools, including antivirus engines and malware sandboxes. As a triggering condition, we trained the AI model to recognize the face of a specific person to unlock the ransomware and execute on the system.”
“Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model, but otherwise behave normally for all users except the intended target,” the researchers added.
“When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.”
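The flow the researchers describe, where the payload stays encrypted until the right face appears on camera, can be sketched in a few lines. Everything here is illustrative: `embed_face` is a hypothetical stand-in for the embedded face-recognition DNN, and the XOR cipher merely demonstrates that the payload is unreadable without the key the target's face produces.

```python
import hashlib

def embed_face(snapshot: bytes) -> bytes:
    """Hypothetical stand-in for the embedded DNN face model."""
    return hashlib.sha256(b"model:" + snapshot).digest()

def lock(payload: bytes, target_snapshot: bytes) -> bytes:
    """Encrypt the payload under a key derived from the target's face."""
    key = embed_face(target_snapshot)
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(payload))

def try_unlock(locked: bytes, snapshot: bytes) -> bytes:
    """Attempt decryption with whatever face the camera currently sees."""
    key = embed_face(snapshot)
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(locked))

payload = b"malicious payload"
locked = lock(payload, b"target-face")

assert try_unlock(locked, b"target-face") == payload   # intended victim
assert try_unlock(locked, b"someone-else") != payload  # everyone else sees garbage
```

For every non-target user the decryption attempt silently fails and the app behaves normally, which is why sandboxes and antivirus engines never see the malicious behavior.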