
Quaisr: Digital twins for the machine-learning age

Omar Matar, CEO of Quaisr, Vice Dean of Engineering and Professor of Multiphase Fluid Dynamics at Imperial College London.

Professor Omar Matar

Digital twins have been around for some time (in fact, the term ‘Digital Twin’ was coined in 2003 by Michael Grieves of Florida Institute of Technology) and are used in a variety of sectors, from manufacturing to energy to consumer goods. One way to define a digital twin, inspired by Arup, is a combination of computational models and a real-world system capable of monitoring, controlling, and optimising its functionality, of developing capacities for autonomy, and of learning from and reasoning about its environment through data and feedback, both simulated and real.

The ongoing trend towards digitalisation of just about every sector is driving massive acceleration in digital-twin adoption as new data streams become available to organisations. The digital-twin market is reportedly experiencing 100% growth year-on-year with some projecting that it will reach $48 billion in 2026, from just $3 billion in 2020.

Despite this growth, the full potential of digital twins has yet to be realised. With colleagues from Imperial College London and the Alan Turing Institute, I co-founded Quaisr to bring digital twins into the machine-learning age, allowing enhanced capabilities with applications spanning environmental monitoring to improving infrastructure resilience.

But what exactly are digital twins? You can think of a digital twin as a digital replica of a physical asset. An asset is instrumented with a variety of sensors which collect and feed information back to the individual or system controlling the asset, who can then action interventions based on this information.

In the most basic sense, digital twins integrate various sources of digital information about the asset and its environment to allow more efficient and predictive modifications and iterations in the physical world.

If you think about the baseline of how this happens, the data-integration and iteration journey revolves around the operator. For example, as shown graphically in the ‘Baseline’ image below, R&D teams that run experiments, simulations or other innovation projects feed data to a human operator, who then runs small-scale tests followed by production or pilot-scale testing. At each of these two stages (bench, production/pilot), asset data is fed back manually to the operator to allow for iterative improvements.

 

Using existing technology, we can go one step beyond this as shown in the ‘Iteration’ image. Here, the operator stays in the loop but with a semi-automated feedback process. For example, you might have experiments automatically feeding data into a digital recommendation engine at one end, and the production scale tests doing the same at the other end, but with an operator in between deciding on whether or not to act on the intelligence provided.

But now imagine a situation where you have an operator interacting and collaborating with a smart machine, as shown in the ‘Vision’ image. This is what we are really driving towards with our approach to digital twins.

Here you have information from experiments, simulations and algorithms driving the recommendation engine, together with the information coming back from production testing. Between these two sources of information in the loop you have a robotic operator or smart machine actioning the suggestions, collaborating closely with the human operator. This opens up many new opportunities, not just in optimising physical assets but also in understanding how they might behave in a given situation. If basic IoT streams tell you about the health of the asset (the ‘what now’), and the addition of machine learning unlocks future projections (the ‘what next’), Quaisr digital twins enable ‘what if?’ questions. For example, you might want to know how your asset will behave if it is pushed outside of its comfort zone into a completely new operating space: whether it will be safe, secure and resilient.
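To make this loop concrete, here is a minimal sketch in Python of how a calibrated model, a recommendation engine and a feedback step might fit together. The class and function names (AssetTwin, recommend, and so on) are invented purely for illustration; this is a toy pattern, not Quaisr's implementation.

```python
# A toy digital-twin feedback loop: sense -> calibrate -> ask 'what if?' -> recommend.
# Everything here is hypothetical and illustrative only.
import random


class AssetTwin:
    """A deliberately simple model of a physical asset."""

    def __init__(self, efficiency=0.80):
        self.efficiency = efficiency  # calibrated model parameter

    def calibrate(self, sensor_reading):
        # Nudge the model towards the latest real-world observation.
        self.efficiency += 0.1 * (sensor_reading - self.efficiency)

    def what_if(self, operating_point):
        # Predict performance at an operating point the real asset may never
        # have been pushed to ('what if?' rather than 'what now?').
        return self.efficiency * (1.0 - 0.05 * abs(operating_point - 1.0))


def recommend(twin, candidate_points):
    # Recommendation engine: pick the operating point the twin predicts is best.
    return max(candidate_points, key=twin.what_if)


twin = AssetTwin()
for _ in range(10):
    reading = 0.75 + random.uniform(-0.02, 0.02)   # stand-in for real telemetry
    twin.calibrate(reading)                        # feedback from the asset
    set_point = recommend(twin, [0.8, 1.0, 1.2])   # 'what if?' evaluation
    # In the 'Vision' picture a smart machine would action set_point,
    # collaborating with the human operator on whether to accept it.
```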

The types of challenge that our digital-twin components can address are broad, from environmental contamination detection to production-line decision automation to optimising offshore wind-farm locations. We have projects completed, in progress or starting soon with major companies at the operational level.

Quaisr provides modular components for building digital twins, backed by a managed service. We help customers to create their own digital twins using in-house domain knowledge, reducing the problems of adapting and commissioning generic commercial-off-the-shelf (COTS) alternatives. Quaisr components empower citizen developers to build production tooling using internal company data streams and existing cloud-provider resources.

Our approach accelerates the journey to digitalisation by providing insight for design and prototyping, and by bridging the gap between data and actionable intelligence to empower decision making. It also unlocks collaboration via a cross-team digitalisation standard. Importantly, we prioritise interoperability with existing technologies in a company’s ecosystem, allowing the integration of often-siloed legacy data with newer data streams and machine learning.

Digital twinning is a developing field, and so whilst several existing companies have capabilities in one or more of the underlying aspects, such as data streaming or simulation, Quaisr is unique in taking a digital-twin-first approach, combining all of the elements to create the infrastructure for building digital twins.

With the speed at which global digitalisation is taking place, and with the rapid improvement and deployment of machine learning and simulation, the next phase of bringing these factors together to realise increased capabilities and efficiencies will depend on digital twins. Quaisr is at the forefront of this revolution.

For more information on how we work with companies, please get in touch at omar@quaisr.io.

The role of industry in security and defence innovation

Adrian Holt is Head of Defence at Capita Consulting and a mentor to a range of start-ups. He is a retired Royal Air Force officer who served for more than 24 years.

Capita is a consulting, digital services and software business, delivering innovative solutions and simplifying the connections between businesses and customers, governments and citizens. The company is one of our industry partners in the ISST Innovation Ecosystem.

 

The role of industry in the broad security and defence innovation ecosystem is a subject close to my heart. Throughout my career I’ve seen the good, the bad and the ugly when it comes to defence procurement, and I know that we can, and we must, do better.

The fact that, over the past 10-15 years, something like a 25-year technology advantage in some areas has been squandered suggests that the ugly has sometimes outweighed the good. I’m hoping that by encouraging this discussion as a community, we can identify the problems and find solutions.

So, what is the ‘bad’ or ‘ugly’, and what is the ‘ecosystem’?

I define the ecosystem as the sum of the efforts of government, industry and start-ups contributing to defence and security, with academia playing an essential role feeding directly into all three.

During my career in the RAF I was in the position of using, supporting or buying platforms, and as Stephen Covey said, “We see the world not as it is, but as we are”. So what does bad look like from my perspective? There are three points which I think were far from ideal:

  1. Highly complex and detailed project and capability plans which quickly become unmanageable.

Plans are of course essential, but in my career I have seen many become too complex, unmanageable and unrealistic. This causes those in charge of delivering the plans to spend all of their time planning and not enough delivering, so milestones inevitably get missed. I personally spent 18 months of a tour managing a plan that I knew was never going to be executed, simply because it was too complicated and continually slipped: everyone was moving around trying to manage the plan rather than execute it.

When the overly complex plans fail, we end up in a situation where everyone is trying to hold everyone else accountable, whether that’s industry holding government to account or vice versa. It becomes incredibly difficult, adversarial and expensive to change anything, and we end up stuck with capability plans which are so long and enduring that the solutions are obsolete before even being delivered.

  2. Competition rules and commercial processes which allow the big players to catch up at the expense of first-mover innovators.

The glacial pace of procurement in defence means that a start-up that has an innovation on the shelf can be overtaken by industry primes in the time it takes to complete the commercial cycle. The primes can use the time to create an offer from scratch. Then their size, structure and experience allows them to navigate the bureaucracy and finance systems of the customer more efficiently, whereas start-ups often just don’t have the capacity for this.

  3. Barriers to design thinking.

Whilst this is not as obvious as the first two problems, I think it is very important. Sometimes, industry tries to keep itself between the start-up solution provider and the government client, probably to keep an eye on potential profit opportunities or increases in scope. I have personally seen this leave one start-up badly damaged when they were unable to fix a client’s delivery problems because they were contractually bound to conduct only work dictated by the prime. This reduced quality, increased cost, reduced trust and irrevocably tarnished the start-up’s reputation.

A more beneficial arrangement would be for the start-up and the end user to work closely together at the front line, where the problem exists. It would be a much healthier relationship and would benefit all parties in the long term.

Erosion of trust

Where these problems occur, they seem to arise from an erosion of trust which then necessitates bureaucratic and adversarial conditions in the complex and expensive procurement process. This isn’t the optimum situation for any party, but I’d argue that the start-ups and the people on the front line pay the largest price.

So what can we do about this?

I have seen some great examples of how we can overcome these challenges from my experience working with the jHub which I helped establish in late 2017.

The jHub was the initiative of General Sir Christopher Deverell, created to help the then Joint Forces Command become more ambidextrous, that is, able to explore new opportunities whilst simultaneously benefiting from the investments it had already made.

It was a radical departure from what I’d seen in nearly a quarter of a century of my career. A key objective was to become a better partner for start-ups. The idea was to help them understand how to work with government, to help government understand them, and to help both parties over some of the hurdles.

The jHub’s value proposition is to connect world-class technology and talent to users in the defence sector. It does this by funding and accelerating pilots, with successful ideas getting access to an innovation committee which can make strategic investment decisions. This removes a lot of the burden from early-stage innovators.

But beyond this we tried to build a system to actively encourage early-stage businesses to work with Defence, by showing that we’d not only reduced the burden, but that we also understood the problems they’d face.

We didn’t get everything right but we did make a difference. Through a network of partners we managed to deliver 18 projects in our first 18 months, which was unheard of in traditional defence timelines.

New challenges and solutions

That being said, I still lost a number of deals with promising start-ups because they simply weren’t able to engage with government’s traditional procurement processes, which are paperwork-heavy, long and drawn out. New challenges emerged, such as helping start-ups over regulatory hurdles like explosives handling, guiding them through security clearance and getting them access to X-listed facilities. Business support was also important; many start-ups don’t have the business experience to get themselves into shape to do business with entities of the scale and complexity of government.

So how do we fix the remainder? I personally think industry is front and centre in this.

First, we need to help start-ups navigate the Valley of Death by providing services that they find genuinely useful, such as business support, or resources to help them scale when they do land a contract with government. We can provide them with facilities or spaces in the right places, such as access to X-listed facilities if needed. We can help them get security clearance so that they can expand the types of contracts they can accept. We can provide them with connections to help them make more compelling offers on tenders.

We’re all in this together

Most importantly, we need to see that our fortunes are inextricably linked and recognise this in the agreements we make.  We need to move towards mutually beneficial contracts between government, big business and start-ups and aim for win-win outcomes. It is no good if one party signs a contract which is financially beneficial for them in the short term at everyone else’s cost. It simply adds to the erosion of trust.

We also need to take more risk against the contracts we sign and be more agile in their delivery. We need to avoid adversarial adherence to unrealistic schedules.

We can consider leaving holes in our contracts so that either SMEs or Forces personnel can move into those spaces, and they’re not so reliant on primes. And if necessary, we should sometimes look to take the hit on margins to enable a deal to get done.

My hypothesis is that if we do all these things then we will gain mutual and collective benefit through the return business we get, and from the trust we’ve developed with the government and with start-ups.

Finally, we need to aim for genuine collaboration. I can’t think of a better example of that than the ISST Innovation Ecosystem. In a fast-moving world, it will be exceedingly rare to find the answer inside the room, and so new spaces and forums are needed. Even when we own the technology our clients need, we might be better off partnering with others to deliver it in the most beneficial way. And it’s through organisations like the ISST that we can enable this to happen.

Expert panel shines light on key space safety and security issues

The Institute for Security Science and Technology recently ran an online briefing event with Imperial SpaceLab and ISPL around space safety and security.

The discussion highlighted the complexity of issues around space commercialisation and governance, and touched on international relations, politics and science.

We posed a few questions to three members of the expert panel to share their thoughts on some of the main discussion threads which came up.

The panel included Dr Jonathan Eastwood (Imperial College London), Nick Howes (BMT) and Rich Laing (NATO Communications and Information Agency).

 

Dr Jonathan Eastwood, Senior Lecturer and Director of Imperial SpaceLab

“The area of space safety and security cuts across an enormous variety of sectors and interests. There is a real need for everyone to work together, and to bring different entities and institutions to the table so that the best solutions can be found.”

 

How important do you see national risk registers in driving policy around space safety and security?

The National Risk Register plays a really important role in crystallising understanding of different potential threats, and providing a central statement of the need to address them. In the case of space weather for example, its introduction into the Register was of key importance because it galvanised a number of separate communities to come together to address the problem. As a result, the UK is arguably world-leading in a number of areas relating to space weather preparedness, and is much more joined up (particularly between academia, industry and government) than it otherwise would have been.

What key developing issues regarding space safety and security should policy makers have in mind to inform their work?

On one level, I think it’s very important that policy in this area is evidence-based, and also scientifically based: operations in space are subject to the laws of physics! This means a good understanding of the physical environment, its properties, and then how human and robotic activities are affected are all crucial. At a second level, from my own research area it’s important to recognise that space isn’t ‘empty’, and that there are all sorts of effects – space weather – that can affect our modern technological society both in space and on the ground.

What do you see as the role of academia in helping to develop the UK’s space safety and security capabilities?

The area of space safety and security cuts across an enormous variety of sectors and interests. There is a real need for everyone to work together, and to bring different entities and institutions to the table so that the best solutions can be found. I hope that the academic sector can facilitate this, particularly in providing objective, evidence-based input to the formulation of space policy and law. Academia also has a key role to play in helping policy makers, who may not have a technical background, to understand these issues.

 

Nick Howes, Lead R&D Space Systems, BMT

“The threat to our defence and critical national and international infrastructure from a Kessler-scale event cannot be overstated.”

 

Are we doomed to repeat the same dynamics in international governance of space as we have with land and sea?

The key issue is that the mega-constellations appear to be launching with almost total impunity, with licences from the FCC being granted almost like water. The threat to our defence and critical national and international infrastructure from a Kessler-scale event cannot be overstated. Therefore, it appears we are, and the United Nations really needs to step in before it is too late.

How do mega-constellations impact planetary defence surveys and other issues of global collaboration?

Wide-field telescope surveys from the likes of LSST and the large binocular survey rely on automated data-reduction pipelines to observe and track comets and asteroids, both for science and as potential threats. Putting upwards of 50,000 satellites in the way, even at magnitude 8 (these scopes can easily reach magnitude 24), will make that job more difficult. The impact on radio astronomy will be nothing short of catastrophic.

Are there any specific issues of space related safety and security regarding Brexit?

The major issue with Brexit is the isolation of the UK, and the brain drain in academia and science we are already seeing.

What do you see as the role of academia and industry in helping to develop the UK’s space safety/security capabilities?

Academia and industry need to ensure that the future for scientific exploration is there. Teams like the SSLC have been attempting to assist and inform government with respect to the regulations. We can only hope they listen.

 

Richard Laing, Senior Scientist, NATO Communications and Information Agency

“Current structures for governance are predominantly based upon Westphalian concepts of state, and would need to adapt to embrace multinationals and commercial entities [in space].”

 

Are we doomed to repeat the same dynamics in international governance of space as we have with land and sea?

The potential of space means that nations, multinationals and the commercial sector have a keen interest in operating within the global commons of space. As these interested parties have self-interests that are inevitably going to conflict, the need for accepted norms of behaviour is key, and establishing a form of governance will have to be closely associated with it. Agreeing the behaviours of a “responsible actor” in space will also inevitably need some form of mechanism for cautioning and “punishing” those who breach those behaviours.

Current structures for governance are predominantly based upon Westphalian concepts of state, and would need to adapt to embrace multinationals and commercial entities.  Without an effective method for establishing norms of behaviour and governing activity, the first mover advantage will lie with those prepared to take the highest risk (physical, political, or reputational) at the expense of other actors.

What key developing issues regarding space safety and security should policy makers have in mind to inform their work?

An effective understanding of threat needs to embrace threats from all angles: natural, nefarious and accidental. To achieve this understanding of the environment, the need to share data and work in collaboration with other actors is key, linking to the previous point on the need for a structure to establish behaviours and offer a communication forum.

The potential for accidental or irresponsible actions to be misconstrued as nefarious could lead to strategic repercussions; safety and security in space is based on an understanding of the operational environment and the motivations of other space actors. Grey-zone activity, the level of conflict that exists between war and peace and that has become increasingly prevalent, will inevitably reach into space; understanding where and when this may happen is vital for attributing blame for safety and security events.

Introducing BMT – the latest partner to join the ISST Innovation Ecosystem

Earlier this year we welcomed BMT as the latest industry partner in the ISST Innovation Ecosystem.

BMT is an international design, engineering, technology and risk management consultancy. With a broad and deep range of expertise, BMT operates across multiple markets including Shipping, Defence, Security, Environment and Infrastructure.

Max Swinscow-Hall recently caught up with Alan Hodgson – Security, Policing and Justice Lead at BMT – to learn more about what they offer and how they are planning to get involved with the ecosystem.

 

What is BMT’s mission and how do you achieve this?

Our mission is to provide clarity from complexity; working with our customers to turn their complex projects into clear thinking and groundbreaking solutions.

We achieve this by providing independent technical expertise and consultancy. We offer Defence and Security Acquisition and Customer Friend support, Maritime Design and Consultancy, Asset Monitoring and Sustainment, and Environmental and Climate Solutions to our global customer base.

Our specific areas of expertise within Security and Technology include:

  • Agile Software Design and Delivery
  • Advanced Data Analytics and Insight
  • Artificial Intelligence and Machine Learning
  • Cyber Security Services
  • Digital and Business Transformation
  • Change Management
  • Strategic Delivery Partnerships

 

Who does BMT typically work with, and how can academia engage with BMT?

BMT works with a range of defence customers — such as the UK Ministry of Defence, Dstl, Defence Digital, the Front Line Commands and industry suppliers  — and security customers including the Metropolitan Police Service, Counter Terrorism Policing HQ, National Crime Agency, Home Office (including the Accelerated Capability Environment) and the Ministry of Justice.

BMT has a strong and diverse academic engagement portfolio across the UK. We do not limit ourselves to a small number of institutions but instead strive to create networks across academia linking together the best individuals and centres of excellence. This is one of the reasons we were so keen to join the Imperial Institute for Security Science and Technology; their expertise in both defence and security is something that sets them apart.

We have found recently that our defence and security customers increasingly value the academic viewpoint and perspective when completing exploratory or innovative projects. We are always keen to work with academia both on structured course-based research programmes and through collaborative project work.

 

What projects have you been involved with recently?

We’re proud to have provided the Metropolitan Police Service (MPS) with essential management consultancy services since 2016; we’ve become the Met’s trusted advisor for their transformation portfolio, a £1bn capital investment that delivers £450m-£650m in operational savings each year to ensure that the Met continues to be the world’s leading police force and that London is the safest global city.

An important example of these efforts is “One Met Model 2020” – a major initiative to equip officers with the skills, tools and approach to police London effectively in the digital age. It’s a substantial, long-term change designed to introduce more efficient ways of working to deliver a better service to the public. However, the Met faced a challenge in that they lacked sufficiently qualified and experienced managers to run the necessary programmes and projects.

In order to help them meet that challenge, we provided Programme Managers, Project Managers and Senior Project Managers operating at senior levels of the organisation to deliver projects. Our consultants also worked closely with existing MPS programme managers to coach and train them, improving their capabilities and ensuring consistent standards across the service.

By developing and delivering business cases, introducing project artefacts to improve management, leading the delivery of effective dependency management and providing much-needed leadership to complex projects and programmes, our work has been instrumental in helping the MPS to successfully deliver this major transformation.

We have successfully worked on other novel and complex change programmes across multiple domains. As a business we have benefitted hugely from experience delivering large scale, complex security programmes and we have supported a range of Security, Intelligence and Law Enforcement Agencies.

 

How have you seen COVID-19 impacting security and defence?

Even before COVID-19, the defence, security and policing landscape was complex but also constantly changing with budgets being stretched. All services are having to become more resilient, responsive and agile. An increasing dependency on data and digital technologies is at the heart of ongoing reform and modernisation. COVID has helped to accelerate this change but also poses a range of future budgetary risks and uncertainties.

Like many other government departments, the defence and security sector is becoming  more effective and more efficient by transforming support services, reducing estate costs, reducing management overheads, increasing frontline productivity, replacing core systems, removing unnecessary bureaucracy and developing new capabilities to meet emerging demands.

Our recent Focus magazine on the topic of Digital Transformation goes into more detail about these changes and what organisations can do, in practice, to respond, turning the buzzword of “Digital Transformation” into actionable steps as part of a 10-step process.

 

As we move towards an increased reliance on digital platforms, a range of broader cyber security risks are also raised. How is BMT working to reduce this cyber security threat?

We recognise the vital importance of cyber security, especially in the current COVID-19 world where we have seen a rise in the number and frequency of attacks. To help our customers counter this threat and mitigate their risks, we have a dedicated cyber security consulting team here at BMT.

Our blended team of cyber security experts, data scientists and software developers help organisations reduce risks and improve resilience by developing their capacity to identify threats, protect assets and detect cyber events.

  1. Identify – We work with our customers to help them identify security and information risks. We offer business–driven advice on how they can best manage and mitigate these risks throughout the project life cycle.
  2. Protect – We help our customers protect their critical assets by transforming their cyber security. Our consulting services and cyber solutions reduce the impact of possible cyber security threats by using best practices for data protection and security.
  3. Detect – Our cyber security consultants apply their expertise in machine learning, data science, data modelling and analytics to help our clients detect known and unknown cyber events.

The cyber threat is constantly evolving and maturing and therefore so must the security solutions. We are always looking for new ways to tackle current and future cyber threats and so are keen to work with academia to explore potential solutions.

 

What attracted you to joining the Ecosystem and how do you see BMT playing a part?

The ecosystem has already got a brilliant reputation for bringing together the golden triangle of industry, academia and government in one forum to talk about disruptive technologies and turn ideas for change into reality.

We are very keen to play an active role in the ecosystem and hope that we will have a chance to contribute thought leadership and work together on a range of joint research projects. We are especially excited to hear about the ISST-organised conferences and hope to use these as a chance to talk about our internal research programmes. We are also hopeful that membership of the ecosystem will bring us closer to the students at Imperial College London, using this engagement as part of our continuous search for new talent.

Jo Symons from DASA talks about innovation and DASA’s successes

 

The UK’s Defence and Security Accelerator (DASA) opened a hub in the I-HUB building on Imperial’s White City campus back in July 2018. Since then, the presence of security and defence organisations at the I-HUB has continued to grow with the expanding ISST Innovation Ecosystem.

ISST Special Projects Manager, Zarine Khurshid, caught up with Jo Symons, who joined DASA in early 2020, to give Jo the opportunity to introduce herself and talk more about innovation in security and defence, and about DASA’s successes to date.

 

What led you to join DASA?

I joined DASA at the end of February but it feels as if my career to date has been leading to this role.

I joined the civil service straight after completing my degree in Manufacturing Engineering at Birmingham University and one of my first roles was as a technical assessor for a Government scheme which gave small companies grants for research and technology projects.

That whetted my appetite for supporting innovators, which has continued throughout my career. My fascination for how innovators and small and medium-sized enterprises (SMEs) grow continued, and I was very lucky to be able to undertake an Executive MBA at Imperial College London.

My MBA project investigated the differences between angel finance in Cambridge and Silicon Valley just before the dot-com bubble burst. I went on to lead a review which resulted in the creation of what is now Innovate UK. Working with the Business Department’s Chief Entrepreneurial Adviser, we undertook a review of entrepreneurship in the UK, and alongside this I researched the rapidly developing accelerator and incubator market in this country.

I am so pleased to be working at DASA – an exciting, young organisation that is trying to do things differently in very challenging and important sectors.  As a former student I’m delighted I’ll be able to support Imperial College London’s work to develop the Defence and Security Cluster at the White City campus.

 

What is your role at DASA?

My role is Head of the Partnerships and Impact Team; the aim is to increase the number of innovations funded through DASA that progress through the development journey, and to turn them into capability in the hands of the Armed Forces and security services. We recognise that developing the technology is only one part of progressing ideas through to delivery, and that an SME needs a range of support to enable it to grow and secure customers. We do this by:

  • working in partnership with the Armed Forces and other Government agencies to understand their problems so we can tailor our competitions to solve them and encourage the customer to pull through the ideas
  • acting as a smart broker to provide access to mentoring and finance support for SMEs
  • working in partnership with Imperial College to support the development of the Defence and Security Cluster
  • facilitating interaction between SMEs and large industry to encourage collaboration.

 

What is DASA’s role?

DASA’s role is to find and fund exploitable innovation to support UK defence and security quickly and effectively, and to support UK prosperity. We have a geographically distributed team of Innovation partners who find innovators, entrepreneurs and people with ideas. We are interested in any science, technology or service at any stage, from anywhere and anyone. Our competitive process brings experts together to rigorously assess these ideas so we fund only the best and help accelerate their development. Once funded, my team helps the idea progress further towards adoption and integration.

 

Why is innovation so important in security and defence?

It is not just important for defence and security to innovate, it’s absolutely crucial for our national security. The threats the UK faces at home and abroad from our adversaries have intensified in scale, diversity and complexity.

We’re witnessing the resurgence of state-based threats and increasing competition, the undermining and destabilising of the international rules based order, the rise in cyber-attacks, and the wider impact of technological developments which are enabling non-state actors such as terrorist and organised crime groups to have capabilities that previously only states had.

To counter these, we have to retain our strategic and technological advantage. The only way we can achieve that is to be innovative.

What role should government have in the security/defence innovation pathway?

The challenge is to stay ahead of our adversaries and increase our agility to adapt and evolve in the face of evolving threats at unprecedented speed. The Government has a fundamental role in using every lever we have as a nation to achieve this – bringing to bear the widest possible range of capabilities, including defence, diplomatic, economic and so on.

DASA is playing a pivotal role in fast-tracking great ideas and innovative solutions from the private and public sector to some of our most pressing defence and national security challenges.

 

What are the standout successes from DASA to date?

A great example of how DASA is delivering innovation quickly is the crowd safety app, The Krowd. After the Manchester Arena and London Bridge terror attacks in 2017, DASA launched a competition funded by the Home Office to fund crowd safety technology.

The Krowd was one of the projects that received funding and it allows the public to speak directly with security teams at venues, stadiums, transport hubs and shopping centres if they spot suspicious activity.

The app is already in use at the Broadgate Quarter in London and the Exeter Guildhall Shopping Centre.

Other standout successes include our work with the Defence Science and Technology Laboratory (Dstl) to accelerate the development of autonomous ground resupply vehicles for the Armed Forces. Our early work has led to two contracts worth around £5m for the companies we worked with to trial their innovations with the British Army – due to start this year.

And our Access to Mentoring and Finance service has really taken off in the past year, hosting an Investment Showcase and attracting a wide range of investors, which in turn has sparked relationships between the investor community and some of our most promising SMEs. Off the back of some of the training provided by DASA, Kinsetsu, a Northern Ireland-based SME, has grown its turnover to an impressive £1.25m and is now much better prepared to start its journey to investment in 2021. The company specialises in innovative tracking software that can account for and track personnel or assets, giving a real-time view of missing people or items.

After DASA funding of £125,000, it has been trialled with the Royal Navy on HMS Bristol. This work has led to a commercialisation opportunity, with the Royal Navy contracting the company to conduct further trials on the aircraft carrier HMS Prince of Wales, due to start this year. The devices have multiple security uses, at sites such as energy and nuclear facilities, prisons, hospitals and blue-light services, potentially saving thousands of human hours on paper-based manual checks with improved accuracy. The company has successfully commercialised the innovation with the NHS, and gone on to win new Government funding.

The firm’s managing director Jackie Crooks says: “DASA has been invaluable in raising our profile and enabling access to the defence and security sector which we could never have achieved on our own. Mentoring underlined the importance of continuing to innovate, even in an economic downturn, and in May 2020 we were delighted to receive £50,000 from the UK Government’s Fast Start Competition to deliver contact tracing to protect community care teams supporting the elderly and vulnerable against Covid-19.”

 

How does White City fit into DASA’s objectives?

Innovation flourishes through collaboration and a wide range of voices and perspectives. The Defence and Security Cluster at White City creates the conditions for successful collaboration because it brings together industry, SMEs, academia and Government and enables opportunities to be accelerated through partnership working. It will foster a culture of innovation delivery at pace across defence and national security.

Having a physical presence at White City enables DASA to be accessible and part of the developing innovation ecosystem there.

 

What do you hope to build with the White City ecosystem?

A thriving environment that creates the conditions for collaboration between SMEs, large industry, government and academia. Personally, I would like to see some of the SMEs that DASA has funded grow and secure customers and partners for their onward journey. Ultimately, we would like to see new innovations become adopted and integrated into use with our Armed Forces and security services and agencies.

 

Sailing into the Coronavirus Storm Together. A captain’s advice for the rough seas ahead.

This article was originally published online by the U.S. Naval Institute, and featured in the March 2020 issue of their journal Proceedings.

Captain Brasseur has over 20 years’ leadership experience in the U.S. Navy, including command of the USS Whirlwind (PC-11) at the age of 30 in the Arabian Gulf in support of Operation Iraqi Freedom. He is currently serving in the Armament Cooperation Directorate at the U.S. Mission to NATO. The views presented here are his own, and not those of the U.S. Navy or the U.S. Mission to NATO.

The ISST is collaborating with NATO and the NATO MUSIC^2 programme via the White City Ecosystem.

By Captain Michael D. Brasseur, U.S. Navy

The coronavirus is causing death, panic, and chaos the world over. It will likely get worse before it gets better, but it is a temporary condition. In the end, we will defeat this devastating virus. As a naval officer and former captain of a warship, I have learned a lot about how people in difficult situations, facing uncertainty, can overcome significant challenges.

In many ways, a warship is a floating society, complete with all the human drama that comes with combining men and women from all over the country, sending them to sea and charging them to work together to accomplish the mission on behalf of their nation. While there is certainly no comparison between the scale of commanding a warship and leading a fight against a global pandemic, it has become clear over the last week that we—the human race—are literally all in the same boat in the fight against COVID-19. In this fight, our mission is clear: Win.

In these challenging times, leadership will be key to turning the tide of the battle against this virus, an enemy that, for now, seems undefeated.

As the captain of USS Fort Worth (LCS-3), I commanded one of the Navy’s newest, fastest, and most technologically advanced warships. I focused on three things above everything else: Vision, Values, and Culture. A few years ago, I shared our experiences in Build a Winning Team, highlighting how our focus ultimately led to one of the best winning streaks in the young history of our new class of warship.

These lessons, transplanted from the quarterdeck to quarantine, could be valuable to anyone fighting against the pandemic.

VISION

Where there is no vision, the people perish. – Proverbs 29:18 

As captain of Fort Worth, I went to great lengths to paint a vivid picture of our ship’s future. At every opportunity, I would describe in detail what victory looked like for us at each stage of our operations: inspections and maintenance, training, exercises, and ultimately mission accomplishment.

In the fight against COVID-19, there is currently fear, a lack of a unified global vision, and a pointless blame game playing out on the news, online, and in politics. Leaders must articulate a clear vision for our “crew” (our families, co-workers, and friends), one that inspires hope and mobilizes the planet. I envision a post-pandemic world that is closer and more interconnected as a result of us having embraced this fight together.

Think well to the end, consider the end first. – Leonardo Da Vinci

This is my favorite quote. It’s what separates visionary leaders from merely good leaders. It is one thing to paint an inspirational vision, it is quite another to make that vision a reality. This requires that we do as Da Vinci suggests—think through problems all the way to the end and then work backwards to achieve those ends. Each action we take must be toward accomplishing that end. Leaders of every nation need to do some detailed voyage planning in cross-functional, cross-border teams, and chart the course ahead.

VALUES 

As captain of Fort Worth, I knew that to achieve our mission we would need to be physically, mentally, and emotionally strong, and we would require a deep level of mutual trust unrivaled in the fleet. We invested a lot of time building strength and trust, our core values. Those same values are even more important in this fight.

Strength 

We need to be strong in body and understand this will be a long fight. Take time to stay fit; if you are not fit, get fit now. Keep it simple. Eat well. Stop eating processed foods and put clean, whole food in your body. Eat lots of fruits and vegetables. Exercise. If you haven’t exercised before, start now. Do a few push-ups, sit-ups, yoga, and walk. Start small and build up. Do a little every day. Make it a habit at a set time every day. You can do a lot in your house. It’s also great to get outdoors—just maintain safe distances from others.

We need to be strong in mind. The fight against COVID-19 is ultimately an intellectual challenge. We will find a solution soon, but we need the brightest minds in the world focused on solving this problem, and they need the best tools at their disposal. I was recently in Silicon Valley visiting a quantum computing company, and I saw how capabilities to process information, model simulations, and propose solutions are light years beyond what was available the last time a pandemic ravaged the planet.

We need to be strong in soul. It does not matter what your religious faith is. The virus does not care. But what matters is that we all realize that the challenge ahead is big, and that it will tax us all down to our souls, to the core of what we each believe. I believe this test is an opportunity to strengthen our souls and find our inner peace.

Trust

On the USS Fort Worth, I sought to connect with my team on a deep, personal level. I encouraged strong bonds across all levels for two reasons: (1) As captain, I could not be in all places at all times, and (2) when a crew develops deep bonds, they will do anything to avoid letting their shipmates down.

To be successful in this coronavirus fight, we will need to build a deep level of trust among ourselves and in our institutions. Building trust starts with open and honest communication, but the greatest gains in trust are earned through deeds, not words. Building trust takes time. Losing trust can happen in an instant. The quickest way to lose the trust of your shipmate is to not do what you said you would do.

CULTURE

Culture eats strategy for breakfast. – Peter Drucker

As legendary thought leader Peter Drucker suggests, culture is the most critical element in building a winning team.

The culture we built on the Fort Worth can serve as a model for this fight. We created a culture that reflected our values and supported mission accomplishment. Winning was important to us and we were willing to work hard to get the win. Along the way, I wanted us also to be happy and humble.

Hard Working

Serving on a minimally manned ship challenges the crew: the work is hard, there is little personnel redundancy, everyone must be an expert in another’s job, and there is more than enough work to overwhelm the team. Some crews make the mistake of falling into a “woe is me” mindset, which ultimately leads to a victim mentality, low morale, and even lower performance. I never apologized for the amount of work we had to do, and I always reminded my team we were lucky to wear “U.S. Navy” on our chest and to go to sea in our warship.

We have much work ahead in this fight. Our healthcare professionals are leading the charge, demonstrating an unrelenting work ethic. You will not see them feeling sorry for themselves—they don’t have time. But they cannot do it on their own, and the rest of us need to grab an oar and pull to do our part. It is going to be hard, but the work can be the reward: the feeling of doing something very important when your neighbors need you most.

Happy

Never underestimate how important humor is to mission accomplishment. Even in the tensest circumstances, my crew knew they had the freedom to have some fun. Once they started playing practical jokes on me, I knew we had achieved our objective of creating a happy culture.

For those of you making memes, please don’t stop. A good laugh can lighten the darkest situation and turn someone’s day around. My favorite coronavirus meme is the one of a husband asked to choose between two options for quarantine: a) quarantine with your wife and children, or b), and before the announcer could even describe what “b” is, the husband emphatically chooses “B!” Never underestimate how important it is to laugh and be happy. According to the 2013 World Happiness Report, “Happy people demonstrate better cognition and attention, take better care of themselves, and are better friends, colleagues, neighbors, spouses, parents and citizens.”  In this fight, we will need all of the above.

Humble

Pride goes before the fall, and nothing can humble a captain like a warship. While winning was very important to us on board the Fort Worth, I was quick to remind my team that our work had to speak for itself, so it was pointless, even counter-productive, to be boastful.

Now is not the time for national pride or personal arrogance to get in the way of potential resolutions to this crisis. We all need to humble ourselves. In this fight, ideas matter above all else, not whose ideas they are or where they come from. I am reminded of an African Proverb I have written on the whiteboard in my office at the U.S. Mission to NATO: If you want to go fast, go alone. If you want to go far, go together. Today, we need to go far, fast. Ideas must take priority over pride. This is a time for unprecedented, unrestricted collaboration.

Final Thoughts – Winning

On Fort Worth I wanted to win, and I wanted a team of winners. “Hardworking, happy, and humble” meant nothing if we did not have victories to match. We had a simple rule: Celebrate the wins and learn from the losses. Moreover, we never shied away from losses, instead we used them to give life to our ship’s motto: “Just as iron sharpens iron, so too does one warrior sharpen another.”

We will win this coronavirus fight—together. We are up to the challenge and we will be a better “crew” because of it. The end state is a global crew that is healthy and more connected than ever before having sailed through this storm together. If we all work together, I predict this storm will abate soon. After years of sailing the magnificent oceans, captains develop a sixth sense for when a storm will pass, and we know on the other side the seas lie down quite beautifully.

 

What does artificial intelligence mean for cyber security? Prof Chris Hankin speaks to the House of Lords Select Committee.

Cyber attacks are considered one of the major threats to national security by the UK government. Artificial intelligence is considered to be a technology with major potential benefits. But what happens when these two worlds combine?

That’s exactly what the House of Lords Select Committee on Artificial Intelligence wanted to know. To find out more, they recently called in Professor Chris Hankin, Co-Director of the Institute for Security Science and Technology, to provide the panel with professional insight.

Below is a cut and edited summary of the evidence session. Some of the questions included have been rephrased. You can watch the full session online here.

 

What does artificial intelligence mean for cyber security today?

When I think about artificial intelligence in the context of cyber security today, I think mainly about machine learning, rather than broad artificial intelligence.

At Imperial, researchers have had success in using machine learning to analyse network traffic, learn what “normal” looks like, and spot anomalous things which might be indicative of a cyber attack.

This sort of approach is also used, for example, by Darktrace, a UK company.
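To give a flavour of the idea (and not as a description of Imperial's or Darktrace's actual systems), an off-the-shelf anomaly detector can be trained on features of "normal" traffic and then asked to score new connections. The features and thresholds below are invented for illustration.

```python
# Illustrative only: learn what 'normal' traffic looks like, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per connection: [bytes sent, packet count, duration in seconds].
normal_traffic = rng.normal(loc=[500, 40, 2.0], scale=[100, 10, 0.5], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)  # learn the shape of normal behaviour

new_connections = np.array([
    [520, 42, 2.1],       # looks like business as usual
    [50000, 900, 0.2],    # a sudden burst that might indicate an attack
])
print(detector.predict(new_connections))  # 1 = normal, -1 = anomalous
```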

How successful is this approach?

It is a very exciting technology, and Darktrace has made a great commercial success out of it.

There are still some open research challenges to giving more accurate signals about what is going on, and reducing false positives. This is the focus of academic research across the world.

What might future developments of AI in cyber security look like?

In August 2016, a competition was held in the United States to develop automatic defensive systems that could understand when they were under attack, and then repair themselves and mitigate against the attack. Over, say, a 10 to 15-year horizon, we could be looking at that sort of technology being lifted to the level of systems. People often use the analogy of the human immune system when describing this potential technology.

Will only state-sponsored hackers have the means to deploy AI in cyber attacks, or is there a risk that AI-enabled cyber attacks will be “democratised” in the near future?

As Dr. Mark Briers articulated during his answer in the House of Lords, many of the “democratised” threats we see today probably came from state-sponsored efforts some 10 years ago. Earlier this year, in fact, we saw hacking tools that were developed by the NSA being leaked online by a criminal hacking group. Looking forward 10 years, we might expect AI cyber weapons to follow the same path, from initially being developed by states to becoming widely available.

This creates an additional problem in attribution; it is becoming much more difficult to differentiate between state actors and organised crime, as the sorts of techniques that those two groups are using to mount cyber attacks are increasingly similar.

Adversarial AI, which aims to disrupt artificial intelligence learning systems, is a current research topic. How much of an issue are recent developments in that field of adversarial AI for the deployment of AI systems in cyber security?

We have been doing some work on using adversarial AI to see how possible it is to train an attacker to evade the state-of-the-art cyber security detection algorithms, called classifiers, of the type we discussed earlier.

We’ve seen that if you can get into the right part of the system, you can learn a lot about what the cyber security classifier might be doing, and introduce noise into your attack to evade detection. The message I take from this is that, at the moment, AI is not the only answer we should be thinking about for defending our systems.

For example, let’s think about the Stuxnet malware that was used to delay the Iranians in their uranium enrichment process. The attack was essentially a physical attack, mounted through cyber, and in one version at least it caused the rotor blades in the enrichment centrifuges to spin at very high speeds.

An AI detector might have been able to detect that attack by looking at some network traffic, or maybe the adversarial AI approach might have evaded detection. Either way, if you had been standing anywhere near the centrifuges you would also have had a physical signal that something was going wrong.
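As a rough illustration of the evasion idea described above, the toy sketch below trains a simple detector and then perturbs a malicious sample, keeping only the changes that make it look more benign to the detector. The data, the detector and the attack loop are all invented for the example and are far cruder than the state-of-the-art classifiers being discussed.

```python
# Toy query-based evasion: add small perturbations to a malicious sample until
# the detector scores it as benign. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Benign samples cluster around (0, 0); malicious samples around (3, 3).
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
detector = LogisticRegression().fit(X, y)

sample = np.array([3.0, 3.0])  # starts out clearly malicious
for _ in range(200):
    candidate = sample + rng.normal(0, 0.1, 2)  # introduce a little noise
    # Keep the change only if the detector now thinks it is less malicious.
    if detector.predict_proba([candidate])[0, 1] < detector.predict_proba([sample])[0, 1]:
        sample = candidate
    if detector.predict([sample])[0] == 0:
        break  # the detector now classifies the attack as benign

print("Evaded detection:", detector.predict([sample])[0] == 0)
```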

How prepared is the UK for the impact of artificial intelligence on cyber security?

The UK’s NCSC has produced some very good advice for companies, government and private citizens about how to protect themselves. The sorts of attacks that we may be talking about, which are AI-based, will at the moment probably be no different from the sorts of attacks you see from human attackers, and so this advice is still valid.

Advice around cyber hygiene, such as keeping software up to date, having appropriate antivirus software, not sharing passwords with people etc. is very effective in reducing the impact of cyber threats. Unfortunately, the cyber attacks that have been most prominent in the news over the last year—WannaCry, NotPetya, Equifax—have all been the consequence of people running unpatched software, contrary to this advice.

What, in your view, is the single most important policy recommendation?

For the future, it is very important that we recognise that cyber security is a priority within the artificial intelligence area, and that a good number of studentships at all levels are funded to support this linkage between cyber security and AI.

Can we trust cyber-physical systems?

A post by Professor Emil Lupu, Associate Director of the ISST and Director of the Academic Centre of Excellence in Cyber Security Research.

It’s often reported that we can expect 30 billion IoT devices in the world by 2020, creating webs of cyber-physical systems that combine the digital, physical and human dimensions.

In the not too distant future, an autonomous car will zip you through the ‘smart’ city, conversing with the nearby vehicles and infrastructure to adapt its route and speed. As you sit in the back seat, tiny medical devices might measure your vitals and send updates to your doctor for your upcoming appointment. All of this will rely on IoT devices; internet-connected sensors and actuators dispersed throughout our physical environment, even inside our bodies.

On the minds of many, but not so often reported, is that by bringing a digital interface into the system you make it reachable from anywhere on the internet, and therefore reachable by malicious actors too. And by taking the computer out of a secure room and putting it, for example, at street level, you make it vulnerable to someone physically compromising it. Can we trust these cyber-physical systems?

Sensors can lie

So what might these malicious actors do? At Imperial College London, we’ve shown that sensors which, for example, monitor for fires, volcano eruptions and health signals, can be made to lie about the data they report. This can have drastic consequences.

The below charts show bedside-sensor data from a healthcare setting. On the first chart, each vertical, dotted line represents an event when the health of the patient has been at risk. By compromising three sensors, as shown on the second chart, we can cancel all of these points and mask the events.

The consequences of this happening in the real world could be fatal. So we have started working on techniques to detect when sensors might be lying, by measuring the correlations between the measurements of different sensors.
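
The sketch below is a rough illustration of that idea rather than the group’s published algorithm: it flags any sensor whose readings stop correlating with those of the other sensors in the network. The threshold, the synthetic signal and the function name are all invented for the example.

```python
# Rough illustration only (not the published technique): flag any sensor whose
# readings no longer correlate with those of the other sensors in the network.
import numpy as np

def suspicious_sensors(readings: np.ndarray, threshold: float = 0.5):
    """readings has shape (n_sensors, n_samples); return indices of sensors
    whose mean correlation with the other sensors falls below the threshold."""
    corr = np.corrcoef(readings)
    flagged = []
    for i in range(corr.shape[0]):
        others = np.delete(corr[i], i)   # correlations with every other sensor
        if np.nanmean(others) < threshold:
            flagged.append(i)
    return flagged

# Three honest sensors track the same physical signal; a fourth reports noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 10, 200))
honest = np.stack([signal + 0.05 * rng.standard_normal(200) for _ in range(3)])
liar = rng.standard_normal(200)[None, :]
print(suspicious_sensors(np.vstack([honest, liar])))   # -> [3]
```

In practice the correlation structure would be learned from historical data and would depend on sensor placement, but the principle of cross-checking sensors against one another is the same.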

Catching a lie

Using our techniques with fire sensors, we could detect a fake fire event even when it was located next to a genuine fire event. The below charts show the fire detection system – the chart on the right clearly highlights the fake event.

 

This also allows us to detect masked events – when someone is trying to hide an intrusion – and is powerful enough to distinguish these from benign false events.

Finally, we can also characterise and identify the sensors that are likely to be compromised, and calculate how many compromised measurements, or how many compromised sensors, a network can tolerate.
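
As a toy illustration of what a tolerance calculation can look like (not the analysis used in the research), consider a network of n redundant sensors whose readings are fused with a median: the fused value cannot be dragged arbitrarily far from the truth unless a majority of sensors is compromised, so such a network tolerates floor((n - 1) / 2) compromised sensors.

```python
# Toy illustration: with median fusion, a network of n redundant sensors
# tolerates floor((n - 1) / 2) compromised sensors before the fused value
# can be pulled arbitrarily far from the true measurement.
import statistics

def max_tolerated(n_sensors: int) -> int:
    return (n_sensors - 1) // 2

readings = [21.0, 21.2, 20.9, 21.1, 80.0]   # one compromised sensor reports 80.0
print(max_tolerated(len(readings)))          # -> 2
print(statistics.median(readings))           # -> 21.1, the outlier has no effect
```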

Corrupting artificial intelligence

But the risk doesn’t end there. To be useful to us, cyber-physical systems like driverless cars, or implanted medical devices, will use artificial intelligence techniques to learn how we behave and how the physical space around us changes.

The learning requires data from sensors, and as we’ve shown these can be compromised. Learning from this corrupted data could lead to our driverless cars, smart infrastructure and health monitors making the wrong decisions, with dramatic consequences.

This corruption is illustrated in the below diagrams of a machine-learning algorithm, which classifies data into groups. You can see how the classification boundary changes when a single additional data point is inserted. In this case the introduced point seeks to maximise the overall error.

In the below case, which would be called a targeted attack, the introduced point seeks to make the red points be recognised as blue.

What stood out in our experiments was the low number of spoofed data points required to introduce fairly substantial error rates into the algorithm.
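
A minimal sketch of this kind of poisoning is given below, assuming the attacker can insert a single labelled point into the training set and wants to maximise the error of a simple linear classifier on clean data. The dataset, the brute-force grid search and the choice of logistic regression are all illustrative rather than those used in the experiments.

```python
# Minimal poisoning sketch: a single attacker-chosen training point is picked,
# by brute force, to maximise a linear classifier's error on clean data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=300, centers=2, cluster_std=2.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=60, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("error with clean training data:", 1.0 - clean.score(X_val, y_val))

# Try candidate poison points on a coarse grid, with either label, and keep
# the one that degrades validation accuracy the most (an error-generic attack).
best_err, best_point = -1.0, None
for gx in np.linspace(X[:, 0].min() - 2, X[:, 0].max() + 2, 15):
    for gy in np.linspace(X[:, 1].min() - 2, X[:, 1].max() + 2, 15):
        for label in (0, 1):
            X_poisoned = np.vstack([X_train, [[gx, gy]]])
            y_poisoned = np.append(y_train, label)
            model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)
            err = 1.0 - model.score(X_val, y_val)
            if err > best_err:
                best_err, best_point = err, (gx, gy, label)

print("error after one poison point:", best_err, "using", best_point)
```

A targeted variant would instead search for the point that maximises errors on one class only, i.e. the number of red points classified as blue.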

New attacks, new approaches

So far we’ve talked about the issues around compromised sensors. But there are many other issues that arise when we combine digital, physical and human dimensions in cyber-physical systems.

If this were just about physical security, or just about cyber security, we’d be okay. We have tools and techniques for reasoning about physical security, about cyber security, and about the trust we place in people. But we don’t really have techniques to analyse security against attacks that combine these three elements. So what do we need?

Firstly, with such attacks, we need to be able to perform risk evaluation in real time. If some parts of the system have been compromised, what is the risk to the other parts of the system? Unfortunately, techniques for aggregating risk information don’t always scale very well, but research done within my group is addressing this through Bayesian techniques (a rough sketch of the idea is given after the three points below).

Secondly, we need to abandon the idea that we can entirely protect the system. Cyber-physical systems have much larger attack surfaces, and we should assume that the system will be compromised at some point. Instead, we need to develop techniques that enable us to continue to operate in the presence of compromise of the system, or of a part of it.

Thirdly, we need to design security techniques that allow us to combine the digital, physical and human elements. These all represent a threat for each other, but they can also complement each other in the protection of the system. The physical element can, to some extent, physically protect the digital and human elements. The human element can teach the cyber element how to behave, in order to monitor the physical space. And the cyber element can also monitor the behaviour of the humans involved in the system.
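
Coming back to the first point, the sketch below gives a very rough illustration of Bayesian risk aggregation, and is not the group’s actual model: compromise probabilities are propagated through a tiny dependency graph with a noisy-OR assumption, so that evidence about one component (say, an intrusion alert on a gateway) immediately updates the estimated risk to the components that depend on it. The component names, priors and edge strength are all invented.

```python
# Illustrative only: propagate compromise probabilities through a small
# dependency graph with a noisy-OR model, so evidence about one component
# updates the estimated risk to the components that depend on it.
parents = {
    "gateway": [],
    "sensor_net": ["gateway"],
    "controller": ["gateway", "sensor_net"],
}
prior = {"gateway": 0.05, "sensor_net": 0.02, "controller": 0.01}
edge_strength = 0.6   # chance a compromised parent leads to compromise of a child

def compromise_probability(node, evidence, memo=None):
    """Noisy-OR combination of a node's own prior and its parents' risk."""
    memo = {} if memo is None else memo
    if node in evidence:                       # observed fact, e.g. an IDS alert
        return 1.0 if evidence[node] else 0.0
    if node in memo:
        return memo[node]
    p_not = 1.0 - prior[node]
    for parent in parents[node]:
        p_parent = compromise_probability(parent, evidence, memo)
        p_not *= 1.0 - edge_strength * p_parent
    memo[node] = 1.0 - p_not
    return memo[node]

print(compromise_probability("controller", evidence={}))                 # baseline risk
print(compromise_probability("controller", evidence={"gateway": True}))  # after an alert
```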

Success relies on trust, trust needs security

Artificial intelligence and the IoT have been much heralded as disruptive technologies with benefits that permeate society. Trust in these technologies will be the ultimate driver of their societal acceptance and overall success. If the systems are not secure, then they are not trustworthy.

Cyber-physical systems are already with us. We need to urgently address the security issues now to prevent loss of trust as we become more and more dependent on them.

Dr Emil Lupu is Professor of Computer Systems at Imperial College London. He leads the Academic Centre of Excellence in Cyber Security Research, is the Deputy Director of the PETRAS IoT Security Research Hub, and Associate Director of the Institute for Security Science and Technology.

The Security of Driverless Cars

A post by Dr Deeph Chana, Deputy Director of the Institute for Security Science and Technology. This blog first appeared as an opinion piece on the GATEway project website, 19 September 2017.

Image copyright Frank Derks under CC BY 2.0

The opportunities that driverless vehicles present are undoubtedly profound. None more so than the emergence of multi-modal transport services (trains, planes, automobiles … and boats) that will intelligently cooperate to take us from A to B without any human intervention.

Replacing the old biological controllers (namely us), the autonomous vehicle will excel in everything from energy efficiency to just being safe. The technology of today already affords us a near-term vision of the car in which route planning and optimisation, refuelling and recharging, transactions with services (tolls, shops, parking lots), and authentication and hand-shaking for the purpose of site access control are all achieved automatically by the vehicle, without the human ‘in-the-loop’.

Removing the human from all of these piloting activities in concert, including that of physically manoeuvring the vehicle, will prove to be the real transformation in experience that autonomy will bring to car users. The main outstanding technical piece needed to achieve this, the driving bit, is a problem that is rapidly being cracked by some of the largest and smartest companies in the world.

Removing the human from the driving seat

Furthermore, the use of artificial intelligence and deep-learning technology is poised to deliver not merely our replacement but a significant upgrade: a ‘driver’ that will be better at learning, anticipation and adaptation, and one that will work tirelessly, around the clock. Driver 1.0 looks set, almost inevitably, for extinction. But don’t worry if you’re feeling somehow obsolete; all of this will leave us with far more time to get on with the more important things in life, like texting and motorway Tinder, and will eliminate that potent source of stress, road rage, although there are no promises about the more general problem of rage on the road.

However, let’s leave the debate as to whether or not this transport paradigm shift represents a psychological step forward for the road user for another day, and settle for the fact that it will certainly be a technical leap forward in how we go about the business of moving around.

Considering the comprehensive nature of the transformation we’re talking about, it is not unreasonable to ask whether a re-think of what it means for a car to be secure and safe is warranted. Ironically, when we do pose the question, it is not the longer-term prospect of some kind of dystopian robo-world that emerges as the more pressing concern, but understanding how to be secure against humans.

For whatever motivation, and there are plenty to choose from, humans are the most likely to seek the means and methods for compromising the whole operation; either by delivering costly nuisance cyber-hacks or by engineering complex, orchestrated attacks that result in large-scale economic hits or even the loss of life. Tragic incidents in urban settings around the world, such as the most recent in Barcelona, illustrate how the car, even in its current form, may be used to generate terror and fear with global resonance and impact.

Paradoxically, the driverless car simultaneously represents an opportunity to virtually eliminate such incidents and the means by which their impacts could be greatly amplified. Both outcomes will be made possible by the unprecedented interconnectivity the car of the future will possess, where participation in a massive and distributed network of things, including other cars, buildings, IoT devices, knowledge repositories and databases, will provide access to huge computing power and a physical reach far beyond the individual car. Which outcome becomes reality rests on how well the design of this entire car-system considers security problems, and on whether security is ‘designed in’ from the start.

The argument that security is not the primary purpose of the car or that security incidents are generally not that likely to occur is a rationale that risks this aspect of the system’s design being given far less attention than it deserves. We might consider such arguments as rooted in the simplistic view of what we understand the car to be today rather than the reality of what it is about to become. It would be liberating and perhaps more in keeping with the technical revolution to consider the very concept of a car to be a fading reality, being replaced by a completely new mode of transport that bears only a superficial resemblance to the automobile. It may look like a car, move like a car, but in all other aspects it will not be one.

The Gateway project

Within the Gateway project, one of the UK’s autonomous-vehicle urban demonstrators, we have been considering what security for driverless cars should look like in the near, medium and longer term. In the near term we have examined the more practical aspects of securing vehicles that are being rapidly developed in the market, by viewing our trial vehicles as moving cyber-physical systems: the driverless car is far more than just a moving piece of office IT. In the medium term, ensuring that vehicles can trust connections to the things around them with a digital pulse, including other cars, remains an open but tractable problem. Detecting security issues during the operation of such systems, countering problems in real time, and the legal ramifications of failure are all things that will keep our community and our wider networks working for some time to come.
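
As one hedged example of what ‘trusting a connection’ might involve at the message level, the sketch below uses Ed25519 signatures from the Python cryptography package so that a vehicle only acts on a broadcast it can verify against a known public key. The roadside-unit scenario and the message format are invented for illustration, and key distribution and revocation, which are the hard parts in practice, are left out.

```python
# Hedged sketch: a vehicle verifies that a broadcast really comes from a known
# roadside unit before acting on it, using Ed25519 signatures.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Roadside unit side: sign the broadcast message.
rsu_key = Ed25519PrivateKey.generate()
message = b"junction_12:signal=red:ttl=5s"   # hypothetical message format
signature = rsu_key.sign(message)

# Vehicle side: verify against the unit's published public key before acting.
rsu_public = rsu_key.public_key()
try:
    rsu_public.verify(signature, message)
    print("message authenticated, safe to act on")
except InvalidSignature:
    print("reject: message not from a trusted roadside unit")
```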

How the Internet of Things poses fresh risks to public sector systems

A post by Professor Chris Hankin, Director of the ISST. This blog originally appeared on publictechnology.net on 19 June 2017.

With the cyber threat shifting its focus to sabotage rather than data theft, many of the defences deployed by public sector organisations will have to be adapted for the new world.

Information security policies are commonly guided by the CIA triad of confidentiality, integrity and availability. Many of the big security stories in the media relate to confidentiality, where data theft, for example, affects individuals (e.g. personal banking data) but also has a huge economic impact as a result of industrial espionage.

Integrity, or rather its loss, is most evident in the hijacking of websites by “hacktivists” seeking to deface content or replace it with political messages, but it can also be associated with data such as environmental monitoring, stock-market trading or consumer price indices. Availability is often compromised by denial-of-service attacks as well as natural disasters, while recent high-profile ransomware incidents, in which individuals and corporations are denied access to their data while a ransom is extorted, can also be counted in this category.

50 billion connected devices

The CIA triad addresses the security of information technology systems – our desktop computers, laptops, tablets and phones. Whilst these may represent the majority of the devices connected to the internet today, this won’t be the case in the future: there are currently around 15bn connected devices, set to rise by some estimates to more than 50bn by 2020.

This huge growth is fuelled by the emergence of the Internet of Things, in which connected devices control many aspects of our physical environment, from home and leisure, through autonomous vehicles, to city infrastructure. This move into the cyber-physical domain has already been presaged by the convergence of IT with the control systems built into our critical infrastructure and industrial processes. These systems were historically separated in both design and implementation, but economic necessity has driven them together. For example, the millions of lines of code which control a modern automobile have little separation between engine management and the car’s entertainment system.

New types of cyber attack

We have seen new types of cyber attack aimed at sabotage rather than data theft – such as the December 2015 attack on the Ukrainian power distribution network, commonly attributed to Russian involvement. The IoT hasn’t been immune either, with the emergence in late 2016 of the Mirai malware, which targets machines running Linux and was used in the distributed denial-of-service attack (in which many compromised systems are used to target a single system) on the internet company Dyn. That attack was mounted through a network of Mirai-infected printers, domestic gateways, baby monitors and cameras (note, again, the lack of separation between consumer electronics and corporate and operational systems).

Many of the cyber defences deployed today will have to be adapted for this new world. It is unlikely that individual IoT devices (CCTV cameras, toasters) will have the computational power to run anti-malware software, and both safety and access considerations may militate against regular software updates. The increasing emphasis on security-by-default (ensuring systems ship with the most secure settings) may help, but there is also likely to be a greater reliance on intrusion detection and prevention at the system level, and a greater role for network monitoring.

These kinds of tools were traditionally based on recognising fixed patterns indicative of illegitimate behaviour, but there has been a recent trend towards tools based on anomaly detection. The latter use the artificial intelligence technique of machine learning to identify threats. Such techniques suffer from the so-called false-positive problem – they may identify anomalies where none exist – but they are improving. Another problem is that it is sometimes difficult for a human monitor to understand why a machine-learning algorithm has arrived at a particular decision. This is an important area of current research and we can expect to see rapid progress. Since many artificial intelligence applications use machine learning, such advances are likely to have ramifications beyond the confines of cyber security.
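
A minimal sketch of what such anomaly detection can look like is given below, using scikit-learn’s IsolationForest over invented network-flow features (packet count, bytes transferred, duration). The features, contamination setting and data are illustrative only, and the closing comment points at the false-positive issue mentioned above.

```python
# Minimal sketch of anomaly detection on synthetic network-flow features:
# packet count, bytes transferred, duration (all invented for illustration).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 5_000, 2.0], scale=[10, 500, 0.5], size=(1_000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_flows = np.array([
    [102, 5_100, 2.1],      # looks like normal traffic
    [5_000, 900_000, 0.1],  # burst typical of a scan or flood
])
print(model.predict(new_flows))   # 1 = normal, -1 = flagged as anomalous
# Note: some benign but unusual flows will also be flagged (false positives).
```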

The end of passwords

Another major area for change is in authentication. In the IT sector there is already the realisation that approaches based on strong passwords are not sustainable. GCHQ has produced more nuanced advice about passwords that recommends a number of simplifications. In future it is likely that other means of authentication will take on a more important and widespread role. Modern smartphones are already equipped with accelerometers (useful for gait recognition), fingerprint readers, microphones (voice recognition) and front-facing cameras (face recognition, retinal scan). A group of British universities recently developed a notion of cyber-metrics which supports authentication based on human-computer interaction, including measuring typing speed, pressure and interactions with a touch screen.
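
The sketch below gives a deliberately simplified, hypothetical flavour of such cyber-metrics: a user’s inter-key timings are compared against an enrolled profile, and the session is accepted only if the average deviation is small. The timing values, tolerance and feature choice are all invented; a real system would use many more signals (pressure, touch gestures) and a learned model rather than a fixed threshold.

```python
# Hypothetical 'cyber-metrics' sketch: accept a session if the user's
# inter-key timings stay close to an enrolled typing profile.
import numpy as np

enrolled = np.array([0.21, 0.18, 0.25, 0.30, 0.22])   # mean gaps between keys (s)
tolerance = 0.05                                       # allowed mean deviation (s)

def authenticate(sample: np.ndarray) -> bool:
    return float(np.mean(np.abs(sample - enrolled))) < tolerance

print(authenticate(np.array([0.22, 0.19, 0.24, 0.31, 0.21])))  # True: same typist
print(authenticate(np.array([0.10, 0.09, 0.40, 0.12, 0.35])))  # False: different rhythm
```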


It is clear the cyber world is changing and we can expect to see much more about cyber-physical systems in the future. There are some exciting developments in the fields of artificial intelligence, particularly machine learning, and biometrics that will help to make us more secure. Expect to see rapid developments in the next few years as safety and security try to keep pace with increased user demands and technological capability.