The role of Artificial Intelligence in the transition from Information Assurance to Mission Assurance

23 October 2019

Barbara Poggiali, Managing Director of Leonardo’s Cyber Security Business, looks at the definition, concepts and roles of ‘mission assurance’ and suggests how to close the delta between current information assurance and future mission assurance.

With artificial intelligence, including machine learning, emerging as a powerful technique in cyber security, this article challenges conventional thinking, addressing how these techniques can help us make the transition and the part they will play in delivering a full mission assurance scenario.

The development of cyberspace as a domain of operations, and the advances in understanding the many related factors that can affect the outcome of a mission, necessitate the transition from traditional Information Assurance (IA) to Mission Assurance (MA). This involves making use of new data from a plethora of sources, such as those that make up the Recognised Cyber Picture, which will play a key role in MA.

What is Mission Assurance in the operational context?

MA takes the view that completing the operational mission is the ultimate goal. By extension, this means that current cyber security approaches must evolve to support that goal.

Cyber situational awareness and decision support are necessary feeds into an MA scenario and enable the command to take sound decisions to keep the operation running. In this context, the key challenges for cyber in MA are in the execution phase of the mission, not during the planning stage.

Mission Assurance’s contribution to command decisions

While the command makes decisions based on advance planning, it also does so based on the real-time situation and the best predictions it can make.

With MA being part of the command’s operational picture, the command will have an understanding of:

  • Its freedom of manoeuvre across and within cyberspace
  • Whether cyber activity is limiting its freedom of manoeuvre in the physical domains

In achieving this, cyber security can provide the right information for the command to understand the extent to which cyber issues threaten the ability to continue the mission and the availability of the necessary associated resources. All of this information must be included in the operational picture.

As in other areas of the operational picture, the primary indication the command requires is the overall domain status. If the status is not ‘Green’, the aim is to understand the impacts and limitations that the issue will have on the ability to achieve the mission successfully. This depends on working with trusted and operationally relevant information which is timely and current. In a rapidly changing operational environment, that is challenging; in a cyber context specifically, attacks can be launched without any of the conventional forms of warning being apparent. This presents a complex challenge, requiring sophisticated approaches to tackle it.
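As a purely illustrative sketch of this primary indication (the status levels, component feeds and worst-case roll-up rule are assumptions for illustration, not a description of any fielded system), an overall domain status could be derived from per-component cyber statuses like this:

```python
from enum import Enum

class Status(Enum):
    GREEN = 1   # no known cyber impact on the mission
    AMBER = 2   # degraded; mission impact possible
    RED = 3     # mission-critical capability compromised

def overall_domain_status(component_statuses):
    """Roll per-component statuses up into the single primary
    indication the command sees; the worst case dominates."""
    return max(component_statuses, key=lambda s: s.value)

# Hypothetical feeds contributing to a Recognised Cyber Picture
feeds = {"comms": Status.GREEN, "logistics": Status.AMBER, "c2": Status.GREEN}
print(overall_domain_status(feeds.values()))  # Status.AMBER: investigate impact
```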

Using Artificial Intelligence to support Mission Assurance

Technology continues to develop at pace, and initiatives such as the Military Internet of Things will become a reality in the very near future. This will result in ever more interconnected devices and systems, and greater volumes of data requiring analysis to realise their potential in supporting informed operational decisions.

But applying human intelligence alone to perform such analysis is increasingly impractical. Using artificial intelligence (AI) to reduce the human workload to manageable proportions is an obvious approach, since it allows machines to mimic human cognitive functions and so make the remaining human workload achievable. Furthermore, AI techniques can reduce analysis timeframes and improve analytical accuracy.

Of the various AI techniques available, machine learning (ML) is highly relevant in reducing human workload since the algorithms develop and adapt as they learn more about the information being processed.

ML helps achieve useful and reliable data reduction by distinguishing between an anomaly that is simply a user performing a new task and an anomaly that is an attacker carrying out a malicious act inside a system. By continuously adapting to the mission scenario, ML can be more flexible, and more effective at suppressing noise, than any pre-defined ruleset.
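As a minimal sketch of this idea (the features, contamination rate and choice of scikit-learn’s IsolationForest are illustrative assumptions, not a description of any fielded system), a detector trained on recent, presumed-benign activity can flag events falling outside the learned envelope, and can be retrained on a sliding window so that the envelope tracks the evolving mission scenario:

```python
# A minimal sketch of adaptive anomaly-based data reduction, assuming
# activity is summarised as numeric feature vectors (e.g. logon hour,
# bytes transferred, hosts contacted) and scikit-learn is available.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Recent, presumed-benign activity: the model learns what "normal"
# looks like for the current mission scenario.
baseline = rng.normal(loc=[12.0, 500.0, 3.0], scale=[2.0, 100.0, 1.0],
                      size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New events: one plausible "user doing a new task", one far outside
# the learned envelope, as an attacker's actions might be.
new_events = np.array([
    [14.0, 620.0, 4.0],     # unusual, but close to the baseline
    [3.0, 50000.0, 40.0],   # grossly outside it
])
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "anomalous" if label == -1 else "benign")

# Retraining periodically on a sliding window of recent activity lets
# the model adapt as the mission evolves, unlike a fixed ruleset.
```

Separating the merely unusual from the genuinely hostile would still require further enrichment and triage, but even this first filtering step can remove much of the noise a fixed ruleset would pass to human analysts.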

Ultimately, operational decision making will continue to be about speed of thought and action. This is particularly true in the cyber domain, where attacks can be seeded over years yet subsequently occur and take effect in microseconds.

What are the consequences and considerations of using AI?

AI already exists in our environments – in general and cyber systems – and will continue to proliferate. We must therefore decide where it should sit in the processing chain and how close it can get to the commander. We also need to determine how much we rely on AI as opposed to more conventional and predictable techniques.

AI is not completely deterministic and cannot always provide a robust explanation of how a conclusion was reached. Moreover, self-learning algorithms can develop relationships in data which have no causal basis in the real world and may therefore be an extremely dubious foundation for a recommendation. We are already seeing accidental, unconscious bias occurring in automated algorithms.

We therefore require algorithms which give us at least a baseline level of trust in the outcome. Testing must produce certified technologies and models which provide this trusted baseline.

There will also need to be a measure of predictability, demonstrating that the algorithm’s sensitivity is appropriate, since ML algorithms are prone to unintended bias. We must understand bias in the military operational context and ensure it is eradicated.
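One conceivable form such a measure could take (a sketch only: the perturbation scale, trial count and acceptance threshold are assumptions, and `model` stands for any trained classifier with a scikit-learn-style `predict` method) is a stability score that checks whether small input perturbations flip the algorithm’s decisions:

```python
# A hypothetical predictability check: a stable model should give the
# same decisions under small perturbations of its inputs.
import numpy as np

def decision_stability(model, X, scale=0.05, trials=50, seed=0):
    """Fraction of small random perturbations that leave the model's
    decisions unchanged: higher means more predictable behaviour."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    unchanged = 0.0
    for _ in range(trials):
        noisy = X + rng.normal(0.0, scale, size=X.shape)
        unchanged += np.mean(model.predict(noisy) == base)
    return unchanged / trials

# e.g. require decision_stability(model, validation_set) >= 0.95
# before certifying the model for operational use (threshold assumed).
```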

While automation already supports military decision making, those systems have a well-understood set of operating parameters and responses, which provides the basis for certifying them as safe to use. However, it is vital to consider how this would work for an automated cyber algorithm: with existing technology, it is much harder to constrain AI techniques so that they operate within known parameters and boundaries.

Of course there is a great contradiction in wanting the flexibility and dynamic response which AI can deliver, whilst simultaneously wanting those algorithms to be bound by rigid constraints. A balance needs to be struck.
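One way that balance might be struck in practice (a hypothetical sketch: the action names and fallback policy are invented for illustration) is to let the AI propose responses freely while a deterministic, certifiable envelope bounds what is actually executed:

```python
# A deterministic envelope around a flexible AI component: the AI may
# propose anything, but only pre-certified actions are ever executed.
PERMITTED_ACTIONS = {"alert_operator", "isolate_host", "raise_log_level"}
FALLBACK_ACTION = "alert_operator"  # safe default; keeps a human in the loop

def constrained_response(ai_recommendation: str) -> str:
    """Preserve the AI's flexibility in what it proposes, while
    bounding what is executed by a fixed, certifiable action set."""
    if ai_recommendation in PERMITTED_ACTIONS:
        return ai_recommendation
    # Out-of-envelope proposals are never executed automatically.
    return FALLBACK_ACTION

print(constrained_response("isolate_host"))      # executed as proposed
print(constrained_response("wipe_enemy_server")) # falls back to alert_operator
```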

Future direction

Taking all this into consideration, there are clearly both opportunities and risks in using AI to support the command, not least because the incorporation of cyber into the MA model is complex and demanding.

Therefore, AI algorithms must operate within the same framework that humans do, to ensure AI-driven decisions are justifiable, appropriate and compliant. This is not only a major challenge for the technology, but more significantly for us in assuring that position.

With this in mind, we are working with our cyber security stakeholders in this area to address and overcome the challenges, and ultimately deliver effective solutions to underpin MA.