Speaking at Web Summit, Nicole Eagan, chief executive officer of the AI cybersecurity firm Darktrace, described one of the stranger security breaches she has responded to. Strange not because of its size or complexity, but because it was caused by an unsecured internet-of-things fish tank in a casino.

“I couldn't quite figure out why an IoT connected fish tank existed, I guess I don't know enough about fish,” said Eagan. “Supposedly it was to do everything from monitoring the thermostat, to making sure the temperature stayed at the right levels, to monitoring chemical levels in the water, so there were valid reasons.”

While there were (apparently) valid reasons for a casino to have a connected fish tank, Eagan explained that the device was easily breached, and that by accessing it, hackers were able to reach some of the casino’s more sensitive data.

“You could break into this easily – there was no security on it – [then] laterally move into the casino's network and start searching for the high-roller's database, and that's exactly what this attacker did,” said Eagan.

“Once they located the high-roller database they started trying to move it across the network to upload it to a cloud. Darktrace was able to detect this because ‘why is the fishtank searching laterally across the network for a high-roller’s database?’”

Hackers have so many ingenious and inventive ways of accessing networks that it can be difficult for humans to keep up, which is why companies like Darktrace are advocating augmenting human security teams with artificial intelligence.

Considering that in mid-2017 there were almost 350,000 open security positions, a number set to rise to 1.8 million by 2022 according to the Center for Cyber Safety and Education, it’s hard to argue that there is any alternative.

How Darktrace developed its AI

When Darktrace began its mission to bring transformative technology to cybersecurity, it found that there weren’t any historical datasets that fit its purpose. According to Eagan, analysing historical attacks would’ve only provided the same results as the technologies that were already failing, so the company set about using unsupervised machine learning to teach its system on the job, allowing it to learn about threats in real time.

“What we did was say ‘what if we could install this machine learning in any network in less than an hour?’ What if we came back a week later and could actually show a company the threats that had been undetected that were already inside their system, and that's exactly what we do with unsupervised machine learning.”

Having done that for five years, and having been deployed in over 4,000 networks, Darktrace has now built up a formidable dataset that can be leveraged by everyone, from those protecting legacy devices to security teams needing to defend new types of technology, such as IoT-powered fish tanks.
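Darktrace does not publish its models, but the core idea of unsupervised “pattern of life” detection can be sketched in a few lines: learn what normal looks like for each device from live traffic, then flag behaviour that deviates sharply from that baseline. The Python sketch below uses a simple z-score over hypothetical per-minute traffic volumes; a real system would model far more features, and the device data here is invented purely for illustration.

```python
import statistics

def learn_baseline(observations):
    """Learn a device's 'pattern of life' from observed traffic volumes (bytes/min)."""
    mean = statistics.fmean(observations)
    stdev = statistics.stdev(observations)
    return mean, stdev

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag behaviour more than `threshold` standard deviations from the baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical fish-tank sensor: normally sends small, regular telemetry updates.
normal_traffic = [510, 495, 502, 488, 505, 499, 493, 507]
mean, stdev = learn_baseline(normal_traffic)

print(is_anomalous(503, mean, stdev))     # routine telemetry: within baseline
print(is_anomalous(250000, mean, stdev))  # bulk transfer: far outside baseline
```

In this toy model, the sensor’s sudden jump from steady telemetry to a bulk data transfer stands out immediately, which is essentially the signal Eagan describes: a device behaving unlike itself.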

“What we did was say ‘what if we could install this machine learning in any network in less than an hour?’”

But in addition to its use of unsupervised machine learning, Darktrace has also found a place for supervised machine learning in its arsenal.

“We use it to actually analyse and watch human threat analysts,” said Eagan. “So we have a team of 80 threat analysts, many of them came from places like British Intelligence, GCHQ and MI5, or US Intelligence, like the CIA, the NSA, the FBI, and what we did is we actually watched how these people at the top of their game investigated and remediated threats inside of networks.

“We took that knowhow and we packaged it into algorithms and added that into the product, so now you have all the advantages of understanding, from the artificial intelligence perspective, how experienced threat analysts can help investigate and respond to these threats.”
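The article does not detail the algorithms Darktrace derived from its analysts, but the general idea of supervised learning from expert decisions can be illustrated with a toy nearest-neighbour classifier: past incidents, labelled with the action an analyst took, become training data, and new incidents are matched against them. Every feature, number and label below is invented for illustration.

```python
import math

# Hypothetical training data: incident features (MB moved, hour of day,
# internal hosts scanned) paired with the action an expert analyst took.
analyst_decisions = [
    ((200,  14, 1),  "ignore"),      # small daytime transfer, one host
    ((150,  10, 2),  "ignore"),
    ((9000,  3, 40), "quarantine"),  # large 3am transfer after wide scanning
    ((7000,  2, 35), "quarantine"),
]

def recommend(features, k=3):
    """k-nearest-neighbour vote over past analyst decisions (Euclidean distance)."""
    ranked = sorted(analyst_decisions, key=lambda ex: math.dist(features, ex[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(recommend((8500, 4, 38)))  # resembles the quarantined incidents
print(recommend((180, 12, 1)))   # resembles the ignored incidents
```

The point of the sketch is the shape of the approach, not the algorithm: expert behaviour is captured as labelled examples, and the system reproduces the analysts’ judgement on new incidents that resemble ones they have already handled.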

Building trust between human security teams and AI

Working with human threat analysts has been good for Darktrace and its AI, but getting external human security teams to work with and trust artificially intelligent cybersecurity products is another problem altogether.

To build trust between AI security software and humans, the software obviously has to make the right decisions when called upon, but humans also have to trust that it is capable of taking the correct course of action, and that only comes with time.

Eagan described how a short-staffed hedge fund, which had no dedicated security team and outsourced much of its IT, asked Darktrace to deploy its technology in its systems.

The hedge fund wanted Darktrace’s technology, but didn’t want the AI to slow it down or stop it from trading, so Darktrace deployed its AI with a feature called human confirmation. The feature, as its name suggests, allowed humans to review the actions the AI wanted to take and confirm they were correct. Depending on when an attack happened and how complex it was, the AI’s recommendations could even be confirmed from a security team member’s mobile phone.

“Over time what we found is when the AI keeps taking the right action over and over again, the human trusts it and puts it into what we call active mode,” said Eagan.

“We've learned by deploying this autonomous response, that trust relationship of making sure the AI is making the right choices at the right times and allowing humans to have some input and control of that becomes really important.” 

AI on the attack

While AI has massive potential from a security perspective, it’s a tool that could be used to attack as well as defend.

“There's a new emerging type of attack that we haven't really seen yet, and that is an attack that uses artificial intelligence,” said Eagan.

“We've been talking about how attacks can be stopped with artificial intelligence, but what happens if the attack actually starts using artificial intelligence? Imagine something that can move as quickly as ransomware, but actually uses AI, so it can go inside the network and learn what your defences are and figure out how to work around those.”

“There's a new emerging type of attack that we haven't really seen yet, and that is an attack that uses artificial intelligence”

With attackers also utilising AI, what we are seeing, and are likely to see more of in future, is advanced, sophisticated attacks continuing to target high-profile institutions such as nation states, spy agencies and the largest corporations on the planet. But these attacks are also moving to target those lower down the food chain, such as smaller companies and even individuals.

According to Ondřej Vlček, chief technology officer for the security firm Avast, these attacks are enabled by automation and by using AI on the attacking side because of the speed, scalability and accuracy it provides.

“AI is absolutely fabulous to help the security industry to somehow raise the bar in terms of our ability to protect against threats, but what's also happening today is the exact opposite and that is we are seeing more and more cases where the bad guys, the bad actors, are trying to use those same algorithms to actually generate attacks, that is they are using AI to somehow raise the bar in terms of sophistication and complexity of the attacks that they are conducting,” said Vlček.

“I would say security AI is really a double-edged sword. It can be used for good and for defensive purposes, where it really allows us to do our jobs much better, but at the same time it's a real threat when it starts to be used by the attackers also.”

Securing using the human body’s tactics

So are both attackers and security teams about to engage in a cyber war with never-before-seen AI? If that is the case then at least those trying to protect and secure have a roadmap to work from, with companies like Darktrace taking inspiration from the way the human body defends itself from attack.

“The good news is that the human body has been fighting attacks for millions of years, so what if we looked into the human body for inspiration. Now the way that the human immune system works is, it's complex, but it has an innate sense of self, it then automatically detects what's not self, and that's exactly how artificial intelligence can help us detect threats,” said Eagan.

“By putting artificial intelligence inside the core of a network we can analyse 100% of raw network traffic, and we can automatically figure out what's normal and what's unusual.

“By using very advanced types of mathematical approaches we can connect the dots, we can figure out if there are multiple items going on at the same time that are unusual that may increase the probability that the network is coming under attack.”
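Eagan’s point about connecting the dots can be made concrete with a toy probability calculation: several indicators that are each only weakly suspicious can, taken together, still imply a high chance that something is wrong. The independence assumption and the numbers below are illustrative only, not how Darktrace’s mathematics actually works.

```python
def combined_attack_probability(signal_probs):
    """Naively combine independent anomaly indicators: the probability that
    at least one reflects a genuine attack is P = 1 - prod(1 - p_i)."""
    p_benign = 1.0
    for p in signal_probs:
        p_benign *= (1.0 - p)
    return 1.0 - p_benign

# One weak signal on its own is easy to dismiss...
print(round(combined_attack_probability([0.2]), 3))
# ...but several weak signals occurring together are much harder to ignore.
print(round(combined_attack_probability([0.2, 0.3, 0.4]), 3))
```

Under these assumptions, three signals that are individually 20–40% suspicious combine to roughly a two-in-three chance of an attack, which is the intuition behind flagging co-occurring anomalies rather than any single one.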

Vlček said that his company Avast blocks around 3.5 billion attacks a month, too many for even the most dedicated of security teams to handle alone.

AI can take some of that burden away from human security teams, although in the fullness of time the same technology that we are relying on for protection may also need to find a way to defend against its own kind.
