Cylance's AI-based antivirus software can be bypassed: by appending strings taken from a file the system's machine learning algorithm has previously marked safe, attackers can slip malicious code past detection.
Security researchers from the Australian firm Skylight Cyber stated in their report that they had found a way to hoodwink the machine learning algorithm by inserting strings from a benign file that had previously been marked safe.
Artificial intelligence has long been used for defense in cybersecurity. BlackBerry staked on this very business model: its Cylance system is supposed to detect malicious files before they are even run. It turns out that this system, too, has its weaknesses, the researchers note.
"Analyzing the engine and model of Cylance's AI antivirus product, we noticed a bias towards a particular game. By combining an analysis of its feature extraction process, its heavy reliance on strings, and its strong bias for this particular game, we were able to create a simple and rather amusing bypass," the study report says.
"By appending a selected list of strings to a malicious file, we can change its score enough to avoid detection. This method proved successful for 100 percent of the top 10 malware families as of May 2019, and close to 90 percent of a larger malware sample set," the experts added.
The samples the researchers managed to slip past the product include the WannaCry and SamSam ransomware.
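The idea described above can be sketched in a few lines. The snippet below is not the researchers' actual tool; it only illustrates the general concept of harvesting printable strings from a "benign" donor file (much like the Unix `strings` utility) and appending them to another file, which is enough to shift what a string-based feature extractor sees. The function names and the minimum string length are illustrative assumptions.

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list:
    """Pull printable ASCII runs out of a binary blob,
    similar to what the Unix `strings` utility does."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def append_donor_strings(target: bytes, donor: bytes) -> bytes:
    """Append the donor file's strings after the end of the target.
    Bytes appended past the end of an executable image are ignored
    at run time, but a string-based feature extractor that scans
    the whole file will still pick them up."""
    return target + b"\x00" + b"\x00".join(extract_strings(donor))

# Illustrative data, not real malware or game binaries:
donor = b"\x00\x01GameEngineInit\x00\x02RenderFrame!!\x03"
target = b"\x90\x90payload"
combined = append_donor_strings(target, donor)
```

A scanner that scores files largely on the strings they contain would now see the donor's "benign-looking" strings inside the combined file, which is the bias the researchers exploited.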
The report came seven days after BlackBerry launched CylanceGUARD, an addition to the AI platform. Cylance confirmed that the problem is also present in CylancePROTECT.
"BlackBerry Cylance is aware that a bypass has been uncovered by security researchers. We have verified that there is an issue with CylancePROTECT which can be leveraged to bypass the anti-virus product," the company's management reported.
Research and development teams have found a solution and will automatically release an update for all customers running current versions in the coming days. More detailed information will be provided as soon as it is available, the company said.
Security professionals should understand that even next-generation security systems can be deceived, according to Venafi President Kevin B.: "This study should serve as a reminder to security teams that cyber criminals have the ability and the desire to evade next-generation AV tools. We should all expect similar vulnerabilities to emerge in the future."
"Ultimately, AI is not a silver bullet; it's just the latest attempt to do the impossible and predict the future," said Gregory Webb, CEO at Bromium. "When we trust such systems too much to know what is good and bad, we put ourselves at great risk, which can create huge security blind spots, as is happening now."