Category: Cyber Security

How we can secure critical infrastructure against zero-day hacks

A post by Dr Tingting Li, Research Associate at the Institute for Security Science & Technology.

As detailed in the recent Alex Gibney documentary Zero Days: Nuclear Cyber Sabotage, the Stuxnet worm caused havoc in an Iranian nuclear facility by exploiting unknown – and hence unprotected – weaknesses in the computer control system; so-called zero-day weaknesses.

At Imperial ISST we’ve shown that the risk of a cyber-attack like Stuxnet being successful can be reduced by strategically defending the known weaknesses. We can model the relative risks in the system without foreknowledge of potential zero-day weaknesses, and maximise security by focusing defences on higher impact risks.

I’m very grateful to have recently won the CIPRNet Young CRITICS award for this research, which was supported by RITICS with funding from EPSRC and CPNI.

Exploitability of an Industrial Control System

As shown in Figure 1, a typical attack on an industrial control system (ICS) involves a number of steps. Each requires the attacker to exploit a security vulnerability to progress to the next, and each vulnerability can be a zero-day weakness or a known weakness.

Figure 1: A cyber-attack on an industrial control system.

These weaknesses can be assigned an ‘exploitability’ value reflecting their sophistication and the attacking effort required; those with higher exploitability pose a higher risk to the overall system.

Given an acceptable level of risk, we define the tolerance against a zero-day weakness as the minimal exploitability that weakness requires to push the system risk above the acceptable level.
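This definition can be illustrated with a small numerical sketch. The linear risk function and the acceptable level below are invented for the example (the actual model is the Bayesian network described later); the sketch only shows how tolerance is read off as the smallest exploitability whose induced risk crosses the threshold:

```python
# Toy illustration of the tolerance definition; the linear risk
# function and all numbers here are invented for the example.

def risk(exploitability: float) -> float:
    """Invented risk model: baseline risk plus a zero-day contribution."""
    baseline = 0.2
    return baseline + 0.6 * exploitability

def tolerance(acceptable_risk: float) -> float:
    """Smallest exploitability (in 1% steps) whose induced risk exceeds
    the acceptable level; 1.0 means the system never crosses it."""
    for pct in range(0, 101):
        e = pct / 100
        if risk(e) > acceptable_risk:
            return e
    return 1.0

print(tolerance(0.45))  # 0.42: a zero-day weaker than 42% stays tolerable
```

A high tolerance is good news for the defender: the attacker would need a very sophisticated zero-day before the system risk becomes unacceptable.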

Modelling attacks

We created a Bayesian Risk Network based on three types of node. Complete attack paths are modelled by target and attack nodes, and the damage of successful attacks is evaluated against requirement nodes.

We modelled common types of asset in Industrial Control Systems as four target nodes (T1-T4) in the Bayesian Network: a Human-Machine Interface, a Workstation, a Programmable Logic Controller and a Remote Terminal Unit. We also selected five common weaknesses (w1-w5) and five defence controls (c1-c5) from the ICS Top 10 Threats and Countermeasures.

Each weakness is assigned an exploitability value and attached to a single relevant attack node between a pair of targets, forming an attack edge. Each attack node thus becomes a decision point at which the attacker chooses a known or zero-day weakness to proceed. The defence controls reduce the exploitability of a weakness according to their relative effectiveness.

This allows us to model zero-day exploits without knowing their details, and to focus on analysing the risk they cause.

Four trials were run on the network. In each trial a zero-day exploit with one of four exploitability levels (20%, 40%, 60% and 80%) is added to each target, and defence controls are deployed individually. The updated risks are then calculated, as shown in the four charts in Figure 2. The upper curve shows the trend of the risk with no defence control, while the bars show the risk mitigated by each deployed control.
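The trial procedure can be sketched in simplified form. The sketch below replaces the Bayesian network with a plain chain through the four targets, and all exploitability and effectiveness numbers are invented; it only illustrates how a zero-day is injected at one target and how a deployed control scales down the exploitability of the edge it protects:

```python
# Simplified sketch of the trials; the chain approximation and all
# numbers are invented (the actual model is a Bayesian network).

TARGETS = ["T1", "T2", "T3", "T4"]

# Known-weakness exploitability on the attack edge into each target.
KNOWN = {"T1": 0.6, "T2": 0.7, "T3": 0.5, "T4": 0.4}

# Factor by which a control deployed at an edge scales down exploitability.
CONTROL_EFFECT = {"T1": 0.5, "T2": 0.4, "T3": 0.6, "T4": 0.7}

def system_risk(zero_day_target, zero_day_expl, deployed=()):
    """Probability that an attacker traverses the whole chain: at each
    edge they pick the easier of the known weakness and (at one target)
    the zero-day; a deployed control then scales the edge down."""
    risk = 1.0
    for t in TARGETS:
        e = KNOWN[t]
        if t == zero_day_target:
            e = max(e, zero_day_expl)
        if t in deployed:
            e *= CONTROL_EFFECT[t]
        risk *= e
    return risk

# Four trials: zero-day exploitability scaled from 20% to 80% at T2,
# with and without the control protecting T2.
for z in (0.2, 0.4, 0.6, 0.8):
    print(f"{z:.0%}: {system_risk('T2', z):.3f} "
          f"-> {system_risk('T2', z, deployed=('T2',)):.3f}")
```

Even in this toy version the qualitative finding is visible: a zero-day at an early, highly connected edge inflates the end-to-end risk most, and a control on that edge claws back the largest share.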

What did we learn?

In a nutshell, that zero-day exploits at earlier steps in the attack chain create greater risk, and deploying defences at these points can significantly reduce this risk.

The zero-day exploit at asset T2 (in this example the workstation) is the most threatening, as it brings the greatest increment to the risk, while asset T4 is the least threatening. This is because T2 influences more subsequent nodes. Without defence controls, a zero-day exploit of 31% exploitability at T2 will push the risk to the critical level; applying defence control c2, however, raises the required exploitability to 72%.

In addition to single controls we also investigated the most effective combinations, i.e. defence plans, represented by bit vectors of inclusion/exclusion. Plan 10011 for example indicates application of c1, c4 and c5, and exclusion of c2 and c3.
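The bit-vector encoding is straightforward to work with in code. A minimal sketch, using the c1-c5 labels from above (everything else is illustrative):

```python
# Decode and enumerate defence plans encoded as bit vectors over c1..c5.
from itertools import product

CONTROLS = ["c1", "c2", "c3", "c4", "c5"]

def plan_to_controls(plan: str):
    """Return the controls a plan deploys, e.g. '10011' -> c1, c4, c5."""
    return [c for c, bit in zip(CONTROLS, plan) if bit == "1"]

def all_plans():
    """All 2^5 = 32 inclusion/exclusion vectors, from '00000' to '11111'."""
    return ["".join(bits) for bits in product("01", repeat=len(CONTROLS))]

print(plan_to_controls("10011"))  # ['c1', 'c4', 'c5']
print(len(all_plans()))           # 32
```

With only five controls there are just 32 plans, so evaluating every plan against the network reduces the search for the most effective combination to an exhaustive sweep.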

We looked at the impact of each plan on the maximal risk (when the zero-day exploit at each target reaches its maximal exploitability), the risk reduction across different targets, and the tolerance. The tolerance values across targets can be visualised as a radar chart, as shown in Figure 3.

It’s interesting to see in Figure 3a that deploying more controls does not necessarily guarantee larger tolerance coverage. Defending against more widespread weaknesses generally produces more risk reduction across the network. Weaknesses near the attack origin tend to have a greater impact on the risk of all subsequent nodes, so applying defences against earlier attacks is relatively more effective.

 

Dr Tingting Li is a Research Associate at the Institute for Security Science & Technology, Imperial College London. She obtained her PhD in Artificial Intelligence from the University of Bath in 2014. Her research is primarily in cyber security for ICS, logic-based knowledge representation and reasoning, multi-agent systems and agent-based modelling.

Email: tingting.li@imperial.ac.uk

The interaction between safety and security

A post by Professor Chris Hankin, Director ISST

Increasing digitization has led to convergence between IT (Information Technology) used in offices and mobile devices, and OT (Operational Technology) that controls devices used in critical infrastructure and industrial control systems. The IoT (Internet of Things) is also rapidly growing, with around 10 billion devices today.

These trends raise concerns about the interaction between safety and security. The reality of the threat has been highlighted in national news coverage, from cyber security vulnerabilities being exploited to compromise vehicle safety, to denial of service attacks launched from consumer devices.

Discussions are sometimes hampered by the lack of clear definitions of the concepts. Safety is often understood as concerning protection against accidents, whilst security is about protecting systems against the action of malicious actors. But these two definitions miss some essential aspects of the two concepts. A slightly different view is that safety is about protecting the environment from the system and security is about protecting the system from the environment.

Another contrast between the two concepts is how we approach risk assessment. Safety often considers the risk to life and limb and measures risk using actuarial tables. Security more often measures risk through consideration of the threat to information assets – at the moment data breaches may be the key concern. As cyber physical systems become more prevalent there must be a convergence between these different approaches.

From a regulatory and standards point of view, the following Venn diagram summarises the current situation:


However, practitioners recognize that there is not a clear separation (indeed it would be undesirable if there was), so the following is a better diagram of the current situation:


New standards are beginning to consider both safety and security.  There is then a question about how large the intersection should be.  There appears to be general agreement that the following diagram is wrong:

There are differences between the two concepts and we have hinted at what those might be. However, some commentators, predominantly from the security sector, have questioned whether a system can be safe if it is insecure.

The examples of compromise to vehicle safety mentioned earlier give some weight to this view – it is clear that physical harm can result from the exploitation of cyber vulnerabilities. So maybe the following diagram is a better representation:

This is not universally accepted – some would argue that insecure components can be deployed in a system without compromising the safety because of the way in which those components are deployed and their effect is constrained.

Of course an alternative diagram would represent the secure systems as a subset of the safe ones – this could be verbalized by a slogan that a system cannot be secure if it is not safe. This is clearly wrong; safety, in the way we have viewed it here, is only really an issue for OT systems but we clearly want our IT systems to be secure.

For the future, we might want to re-think the relationship between safety and security. The UK Cyber Security Strategy 2016-2021, published on 1st November 2016, is based on three strands – Defend, Deter, and Develop – underpinned by international collaboration. The Defend strand talks a lot about “secure by default” systems and this could be an argument for breaking “out of the box”:

I am sure that this is a debate that will continue for some time.

Chris Hankin

Chris Hankin is Director of the Institute for Security Science and Technology and a Professor of Computing Science. His research is in cyber security, data analytics and semantics-based program analysis.

The origin of threat assessment

A post by Helen Greenhough, PhD Research Student, Imperial College, Dept of Computing

When I worked as an analyst in the defence sector, the adage of threat = capability x intent was widely accepted. But where did it come from?

In the course of my research I was pleased to come across what appears to be the original source of this equation, J. David Singer’s 1958 paper ‘Threat-Perception and the Armament-Tension Dilemma’, where it originally read: ‘Threat-Perception = Estimated Capability x Estimated Intent’ [p. 94, Singer, 1958]. This quasi-formula posits that the perception of a threat can be reduced to zero by reducing either military capability or military intent. In the context of Singer’s paper the equation was part of a discussion of a Cold War disarmament strategy, concluding that weapons, rather than being dismantled or re-purposed, should be transferred to the custody of the UN.

Ultimately the Cold War threat equation was reduced to zero not by removal of estimated capability but through the fall of the Soviet Union – the removal of intent. While Singer’s suggestion of transferring weapons to the UN did not catch on, his equation did, and it is still in use today in defence circles as part of threat assessment activities. Singer’s equation could be viewed as a form of quantitative risk evaluation, which under some frameworks is represented as: risk rating = probability of risk event x impact of risk event. It is not entirely clear whether Singer was inspired by the field of risk assessment, or perhaps even vice versa, but the two areas do seem to have much overlap, with the concepts of risk and threat being inherently interchangeable.

  1. Singer, J. D., ‘Threat-Perception and the Armament-Tension Dilemma’, The Journal of Conflict Resolution, Vol. 2, No. 1: Studies on Attitudes and Communications, March 1958, pp. 90–105. http://www.jstor.org/stable/172848

 

Security of Industrial Control Systems

A post by Professor Chris Hankin, Director ISST

Operational Technology (OT), as distinct from Information Technology (IT), refers to the hardware and software that controls an industrial process.  Despite increasing similarities between OT and IT architectures and components there are quite fundamental differences in the make-up of cyber attacks on each.  In To Kill a Centrifuge, an in-depth technical analysis of the Stuxnet attack, Ralph Langner has already identified three distinct layers of a sophisticated cyber-physical attack: the IT, the Industrial Control Systems (ICS) and the physical layers.  The SANS Institute in the U.S. has recently published an anatomy of cyber attacks  on ICS, involving two multi-phase stages: 1) cyber intrusion preparation and execution – what can be thought of as intelligence gathering; and 2) ICS attack development and execution.

Since it is generally the physical damage that grabs headlines, and there hasn’t been much news about attacks on ICS, one must assume that a significant proportion of the incidents reported to ICS-Cert each year (roughly 250) are intelligence gathering operations.  The recent attack on the Ukrainian power grid may have added a third, post-attack stage – a distributed denial of service (DDoS) attack on the energy company to prevent reporting of outages and slow down the restoration of power.

Against this backdrop, the UK government-sponsored Research Institute in Trustworthy ICS (RITICS) is addressing three key questions:

  1. Can we develop frameworks for assessing the physical harm that might arise from cyber attacks?
  2. Can we better communicate risk that arises from cyber threats?
  3. Can we develop new defensive measures?

RITICS is hosted at Imperial College London and is a partnership of five universities: Imperial, Queen’s University Belfast, the University of Birmingham, Lancaster University and City University London.

 

We are approaching Question 1 with use-cases from transport and energy; Question 2 with use-cases from transport, energy and water; and Question 3 with use-cases from energy.  It is still early days in our work, but we hope to offer new insights and techniques to ICS providers, owners and operators – and we are open to new industrial partners.

RITICS Generic Architecture

The Cyber Security Show

A post by Professor Chris Hankin, Director ISST

I’ve just returned from the Cyber Security Show 2016, held 8-9 March 2016 at the Business Design Centre, Islington. This incorporated an exhibition and conference, one of the major annual cyber security conferences in the UK, for which I was Chairman for the two days.

It is a particularly interesting time in the world of Cyber Security. Just a month ago, President Obama launched the U.S. Cybersecurity National Action Plan. The measures announced include the creation of a Commission on Enhancing National Cybersecurity, a $3.1bn Information Technology Modernization Fund, a new National Cybersecurity Awareness Campaign to empower Americans to better secure their online accounts, and a $19bn investment in cyber during the 2017 fiscal year. A significant amount of the detail in the announcement concerned the protection of Critical National Infrastructure (CNI). This announcement echoed our own Chancellor of the Exchequer’s speech in Cheltenham last autumn, in which he committed £1.9bn to the renewal of the UK’s National Cyber Security Programme. Highlights in the UK plan include better coordination of security efforts through a National Cyber Centre, the creation of an Institute of Coding to address the skills shortage, and significant investment in supporting innovation. The threat to the UK’s CNI also featured prominently in his speech.

The Cyber Security Show reflected these concerns about the threat to CNI and the skills shortage. Key themes which recurred in a number of conference presentations concerned the mechanisms for ensuring better collaboration between Government, industry and academia, and the need for more information sharing.  Another recurring theme was the difficulty of attributing cyber attacks.  Like many others I went to the show certain that the December 2015 attack on the Ukrainian power grid was a long term attack based on the BlackEnergy trojan, but the jury is now out and it seems that the attack might have just been the opportunistic exploitation of poor cyber hygiene.

The Cyber Security Show, as with all such events, gave me the opportunity to catch up with old friends and to make new contacts, both at Government level (UK, Estonia, Italy and NATO to name a few) and in industry. I hope that some of these will lead to new collaborations for the Institute, and I will keep you posted.