What does artificial intelligence mean for cyber security? Prof Chris Hankin speaks to the House of Lords Select Committee.

Cyber attacks are considered one of the major threats to national security by the UK government. Artificial intelligence, meanwhile, is seen as a technology with major potential benefits. But what happens when these two worlds combine?

That’s exactly what the House of Lords Select Committee on Artificial Intelligence wanted to know. To find out more, they recently called in Professor Chris Hankin, Co-Director of the Institute for Security Science and Technology, to provide the panel with professional insight.

Below is a cut and edited summary of the evidence session. Some of the questions have been rephrased. The full session is available to watch online.


What does artificial intelligence mean for cyber security today?

When I think about artificial intelligence in the context of cyber security today, I think mainly about machine learning, rather than broad artificial intelligence.

At Imperial, researchers have had success in using machine learning to analyse network traffic, learn what “normal” looks like, and spot anomalies that might be indicative of a cyber attack.

This sort of approach is also used commercially, for example by Darktrace, a UK company.
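To make the idea concrete, here is a minimal sketch of the anomaly-detection approach using scikit-learn’s IsolationForest. The flow features below (bytes sent, connection duration, distinct ports contacted) are hypothetical stand-ins chosen for illustration; this is a sketch of the general technique, not Imperial’s or Darktrace’s actual system.

```python
# Minimal anomaly-detection sketch: learn "normal" traffic, flag outliers.
# All features and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" flows: [bytes_sent, duration_s, distinct_ports]
normal = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),  # typical payload sizes
    rng.normal(2.0, 0.5, 1_000),      # typical connection durations
    rng.poisson(3, 1_000),            # a few ports per host
])

# Fit on traffic assumed to be benign, so the model learns "normal".
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A suspicious flow: huge transfer, long-lived, touching many ports.
suspicious = np.array([[500_000, 120.0, 200]])
print(model.predict(suspicious))            # -1 means anomalous
print(model.decision_function(suspicious))  # lower score = more anomalous
```

The appeal of this approach is that nothing attack-specific is hard-coded: the model only needs examples of normal behaviour, so it can in principle flag novel attacks that signature-based tools would miss.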

How successful is this approach?

It is a very exciting technology, and Darktrace has made a great commercial success out of it.

There are still some open research challenges to giving more accurate signals about what is going on, and reducing false positives. This is the focus of academic research across the world.
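One reason false positives are such a stubborn problem is the base rate of attacks: genuinely malicious traffic is rare, so even a very accurate detector can produce far more false alarms than true detections. The numbers below are illustrative assumptions, purely to show the arithmetic:

```python
# Base-rate arithmetic behind the false-positive problem.
# All rates are illustrative assumptions, not measured figures.
attack_rate = 1e-4          # assume 1 in 10,000 flows is malicious
detection_rate = 0.99       # attacks correctly flagged
false_positive_rate = 0.01  # benign flows incorrectly flagged

true_alerts = attack_rate * detection_rate
false_alerts = (1 - attack_rate) * false_positive_rate

# Probability that a given alert is a real attack (Bayes' theorem):
precision = true_alerts / (true_alerts + false_alerts)
print(f"Fraction of alerts that are real attacks: {precision:.4f}")  # ~0.0098
```

Under these assumptions roughly 99% of alerts would be false alarms, which is why driving down the false positive rate, and not just raising raw accuracy, is a focus of the research.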

What might future developments of AI in cyber security look like?

In August 2016, DARPA held a competition in the United States, the Cyber Grand Challenge, to develop automated defensive systems that could understand when they were under attack, and then repair themselves and mitigate the attack. Over, say, a 10-to-15-year horizon, we could see that sort of technology lifted to the level of whole systems. People often use the analogy of the human immune system when describing this potential technology.

Will only state-sponsored hackers have the means to deploy AI in cyber attacks, or is there a risk that AI-enabled cyber attacks will be “democratised” in the near future?

As Dr Mark Briers articulated during his answer in the House of Lords, many of the “democratised” threats we see today probably began as state-sponsored efforts some 10 years ago. In fact, earlier this year we saw hacking tools developed by the NSA being leaked online by a criminal hacking group. Looking forward 10 years, we might expect AI cyber weapons to follow the same path, from initially being developed by states to becoming widely available.

This creates an additional problem in attribution; it is becoming much more difficult to differentiate between state actors and organised crime, as the sorts of techniques that those two groups are using to mount cyber attacks are increasingly similar.

Adversarial AI, which aims to disrupt machine learning systems, is an active research topic. How much of an issue are recent developments in adversarial AI for the deployment of AI systems in cyber security?

We have been doing some work using adversarial AI to see how feasible it is to train an attacker to evade state-of-the-art cyber security detection algorithms, the classifiers of the type we discussed earlier.

We’ve seen that if you can get into the right part of the system, you can learn a lot about what the cyber security classifier might be doing, and introduce noise into your attack to evade detection. The message I take from this is that, at the moment, AI is not the only answer we should be thinking about for defending our systems.
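The flavour of this evasion can be shown with a toy example: train a simple linear classifier to separate “benign” from “malicious” feature vectors, then repeatedly nudge a malicious sample against the classifier’s weights until it is misclassified. This is a deliberately simplified, white-box sketch of gradient-based evasion in general, not the specific method used in the Imperial research.

```python
# Toy gradient-based evasion against a linear classifier.
# Data and features are synthetic; this sketches the general idea of
# perturbing a malicious input until it crosses the decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training data: class 0 = benign, class 1 = malicious.
benign = rng.normal(0.0, 1.0, (500, 5))
malicious = rng.normal(3.0, 1.0, (500, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

# Take one malicious sample and push it against the model's gradient.
# For a linear model the gradient direction is simply the weight vector.
x = malicious[0].copy()
w = clf.coef_[0]

eps = 0.1
while clf.predict(x.reshape(1, -1))[0] == 1:
    x -= eps * np.sign(w)  # FGSM-style step toward the benign side

print("Perturbed sample now classified as:", clf.predict(x.reshape(1, -1))[0])
print("Size of perturbation:", round(float(np.linalg.norm(x - malicious[0])), 2))
```

In practice an attacker rarely has this kind of white-box access, which is why the point about getting into the right part of the system matters: the more an attacker can learn about the classifier, the cheaper evasion becomes.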

For example, let’s think about the Stuxnet malware that was used to delay Iran’s uranium enrichment programme. The attack was essentially a physical attack, mounted through cyber, and in one version at least it caused the rotors in the enrichment centrifuges to spin at very high speeds.

An AI detector might have been able to detect that attack by looking at the network traffic, or the adversarial AI approach might have evaded detection. Either way, anyone standing near the centrifuges would also have had a physical signal that something was going wrong.

How prepared is the UK for the impact of artificial intelligence on cyber security?

The UK’s National Cyber Security Centre (NCSC) has produced some very good advice for companies, government and private citizens about how to protect themselves. The AI-based attacks we may be talking about are, at the moment, probably no different from the sorts of attacks you see from human attackers, so this advice remains valid.

Advice around cyber hygiene, such as keeping software up to date, running appropriate antivirus software and not sharing passwords, is very effective in reducing the impact of cyber threats. Unfortunately, the cyber attacks that have been most prominent in the news over the last year (WannaCry, NotPetya, Equifax) have all been the consequence of people running unpatched software, contrary to this advice.

What, in your view, is the single most important policy recommendation?

For the future, it is very important that we recognise cyber security as a priority within artificial intelligence, and that a good number of studentships at all levels are funded to support this link between cyber security and AI.
