Artificial Intelligence
Applying AI to Cybersecurity: The View from Huawei
Artificial intelligence (AI) is increasingly being used within the cybersecurity space, but some challenges remain. Lucy Ingham hears from Denzel Song, president of Huawei Security Product Domain, about how the technology can be effectively applied to security
In the last few years, AI has increasingly been incorporated into a host of cybersecurity products. Promising the ability to augment the capabilities of both traditional security software and infosecurity experts, it has the potential to assist with the deluge of cyberattacks that organisations face.
“There are more and more cyberattacks, and they hide in many ways, with many variants. Taking ransomware as an example: after digital currencies became more and more popular and WannaCry appeared in 2017, hundreds of thousands of ransomware variants followed,” said Denzel Song, president of Huawei Security Product Domain, during a talk at Huawei Connect in Shanghai, China.
“If we only use traditional fingerprint-based tracing and detection to identify ransomware, it is not enough. Therefore, a lot of people have proposed using AI.”
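Song's point about fingerprints is easy to see in miniature: an exact-hash signature only catches the precise binary it was computed from. The sketch below, with entirely hypothetical sample bytes, shows a one-byte variant slipping past a signature database.

import hashlib

# Hypothetical illustration: a "known" ransomware sample and a one-byte variant
known_sample = b"MZ...ransomware payload bytes..."
variant = known_sample.replace(b"payload", b"pay1oad")  # trivial mutation

# A fingerprint database keyed on exact SHA-256 hashes
signature_db = {hashlib.sha256(known_sample).hexdigest()}

def fingerprint_match(binary):
    # Classic signature check: exact-hash lookup
    return hashlib.sha256(binary).hexdigest() in signature_db

print(fingerprint_match(known_sample))  # True: the original sample is caught
print(fingerprint_match(variant))       # False: the variant slips through

With hundreds of thousands of variants in circulation, a database of exact fingerprints can never keep pace, which is why the industry has turned to models that generalise across samples.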
However, despite the fact that Huawei has been working in this space for the best part of a decade, Song is keen to express caution over the application of AI to cybersecurity.
“Of course, AI can really increase efficiency and accuracy, but there are quite a lot of other challenges and issues when applying AI to cybersecurity.”
AI, cybersecurity and the curse of false positives
AI models start, of course, with data – an area that in cybersecurity raises user privacy challenges, according to Song. However, even with this challenge overcome, AI is difficult to make work in real-world security environments due to the high risk of false positives.
“Sometimes there is a huge number of false positives, because normal behaviour will also be identified as an attack by AI,” he said, adding that this was something his organisation had been working on.
“Huawei has carried out extensive research on AI-enabled security in the past several years. Firstly, in single-point threat detection, we used AI technology to increase the detection ratio of threats and also increase the accuracy of detection for different kinds of attacks.”
However, this is not a case of one size fits all. Instead, different algorithms suit different applications and situations, and the rate of false positives varies significantly.
“We use different algorithms and the results are quite different. For example, for credential stuffing attacks we use random forest, quite a simple algorithm, and it is very effective for malicious C&C detection,” he said.
“When we have huge amounts of data, supervised machine learning can provide the most promising results: according to our research in the lab, we found that the accurate detection rate is 99%. But some types of attack can only be identified 70% or 80% of the time, and that means a lot of false alarms.”
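Huawei has not published its models, but the single-point, supervised approach Song describes can be sketched in miniature with scikit-learn's RandomForestClassifier. Everything below is an illustrative assumption: the per-connection features and the synthetic “normal” and “attack” traffic are stand-ins, not Huawei's data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-connection features: [requests/min, login-failure ratio, distinct target accounts]
normal = rng.normal([20, 0.05, 3], [5, 0.02, 1], size=(1000, 3))
attack = rng.normal([300, 0.60, 40], [80, 0.15, 10], size=(1000, 3))  # e.g. credential stuffing

X = np.vstack([normal, attack])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = normal behaviour, 1 = attack

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The false positives Song warns about show up as errors in the "normal" row
print(classification_report(y_test, clf.predict(X_test), target_names=["normal", "attack"]))

On cleanly separated synthetic data like this, the classifier scores near 99%, echoing the lab figure Song cites; on messier real traffic the “normal” row degrades, which is exactly the false-positive problem he describes.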
Making AI work for cybersecurity
While false positives are a concern for single-algorithm approaches, Song argues that they can be overcome by running two or more algorithms in tandem and correlating the results.
“We can link single analyses together, for example, with a clustering algorithm or graph computing algorithm. The single threat analysis results can be analysed together [with the algorithm results] and form an attack chain,” he said.
“For example, if C&C detection shows that one host has been remotely controlled, we cannot say it is 100% controlled by the attackers. But if that host has been attacked and some data has been stolen, then we can confirm that this host has already been compromised, and we can take very definite actions to fix it.
“Through this kind of correlation detection we can increase the accurate detection rate and reduce the false positives.”
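Huawei's correlation engine is not public, but the logic Song outlines – confirming a compromise only when independent single-point detections line up into an attack chain – can be sketched as follows. The event names and the two-stage chain rule are illustrative assumptions.

from collections import defaultdict

# Single-point alerts from per-algorithm detectors: (host, event, confidence)
alerts = [
    ("10.0.0.5", "c2_beacon", 0.7),          # possible C&C traffic: not conclusive alone
    ("10.0.0.5", "data_exfiltration", 0.8),  # unusual outbound transfer from the same host
    ("10.0.0.9", "c2_beacon", 0.6),          # C&C suspicion only, no second stage observed
]

CHAIN = ("c2_beacon", "data_exfiltration")  # stages that together confirm compromise

events_by_host = defaultdict(set)
for host, event, _conf in alerts:
    events_by_host[host].add(event)

for host, events in events_by_host.items():
    if all(stage in events for stage in CHAIN):
        print(f"{host}: confirmed compromise - take remediation action")
    else:
        print(f"{host}: suspicious only - keep monitoring")

The host with only a C&C suspicion is monitored rather than flagged, which is how correlation suppresses the false positives a single detector would raise.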
While this helps to combat the issue, it does not work in all cases, particularly when the threat actor has a high level of expertise.
“For some of the advanced hacker organisations, it is too difficult for us to judge their behaviour, because they can predict your detection algorithm and deliberately circumvent it. For example, they can lie low after compromising a host and not take the next-step actions, so it is difficult for us to do the correlation.”
Taking a preventative approach
While this kind of approach with AI can be effective during or in the aftermath of an attack, preventative measures require an “even higher level of AI application”, according to Song.
“For example, building a global-level AI brain to do more preventive defence. Recently we launched an AI firewall, which can provide local security capabilities,” he said.
This approach uses a method known as “federated learning”, where individual models are trained locally on siloed sets of data before being uploaded to the cloud to contribute to a global model.
“Those models can be added to the cloud, so that the brain can absorb models from different parts of the world,” he explained.
“And since it is federated learning, we don't need to upload any user data to the cloud. Only the model trained on user data needs to be uploaded, so user privacy can be well protected.”
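The federated pattern Song describes can be sketched as a federated-averaging loop: each site trains on data that never leaves it, and only the resulting weights travel to the cloud. The linear model, equal weighting and three sites below are simplifying assumptions, not a description of Huawei's AI firewall.

import numpy as np

def local_train(X, y, global_w, lr=0.1, epochs=100):
    # Train locally, starting from the current global model; X and y never leave the site
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w  # only these weights are uploaded

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # the pattern every site is trying to learn

# Siloed datasets at three sites, stand-ins for per-customer traffic features
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(5):  # five rounds of federated averaging
    local_models = [local_train(X, y, global_w) for X, y in sites]
    global_w = np.mean(local_models, axis=0)  # the cloud "brain" averages the models

print("global model after 5 rounds:", global_w)  # approaches true_w

The privacy property Song highlights falls out of the structure: the cloud only ever sees weight vectors, never the underlying records.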
With access to a far wider range of models trained on different data, this approach enables “relatively more preventive defence measures”. However, there is still considerable room for development in making AI an effective cybersecurity tool.
“There is quite a long way to go and we need a lot of efforts and contributions from the whole industry,” he said.
“We not only try to avoid the problems of AI itself, but also bring AI into the cybersecurity domain and use AI to resolve the more severe cyberattacks we're facing.”