Don’t overthink cyber risk

I have been overthinking cyber risk. I’ve been trying to build a reliable model that I could use to mechanise my risk assessments. I’ll continue to refine my ideas because I enjoy the intellectual challenge. However, I am of the opinion that until we have the cybersecurity equivalent of Fischer Black, we need to accept the inherent inaccuracy of cyber risk modelling and use it to support quick decision-making, rather than to build grand unifying frameworks that ultimately only serve to mislead us further. It’s vital to start scouring the vast quantities of data we create for feedback on our decisions, rather than trying to base them on a level of knowledge that is not achievable.

At this point in the state of the art, cybersecurity risks may be too hard to model reliably, for many reasons. That is not to say that cyber risk assessment shouldn’t happen; as Jack Jones says, “You can’t avoid measurement”. But the limitations of cyber risk management should be recognised, and any strategic response to cyber risk should account for them.

An intelligent adversary responds to our risk management activities. With a nod to John Lambert: these adversaries model their attacks as graphs or networks, whereas we tend to model risks in lists. Adversaries may or may not cause maximal harm, and risk impact is a distribution, because a single adversary attack may cause several different business impacts depending on the adversary’s goals rather than on our exposure.
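The graphs-versus-lists contrast can be made concrete. The sketch below (all node names are invented for illustration) represents an estate as a small attack graph and enumerates the routes an adversary could chain to a critical asset; a flat risk register would record the same nodes but lose the routes between them:

```python
# Illustrative attack graph: nodes are assets, edges are the hops an
# adversary could make between them. All names are hypothetical.
attack_graph = {
    "internet": ["mail_server", "vpn"],
    "mail_server": ["workstation"],
    "vpn": ["workstation"],
    "workstation": ["file_server"],
    "file_server": ["crown_jewels"],
    "crown_jewels": [],
}

def attack_paths(graph, start, target, path=None):
    """Enumerate every route from start to target via depth-first search."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        paths.extend(attack_paths(graph, nxt, target, path))
    return paths

paths = attack_paths(attack_graph, "internet", "crown_jewels")
for p in paths:
    print(" -> ".join(p))
# Two distinct chains reach the critical asset in this toy estate;
# the routes, not the node list, are what the adversary exploits.
```

This is only a sketch of the modelling difference, not a real attack-path tool, but it shows why defending the list of assets is not the same as defending the graph of paths.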

Our technology platforms and estates have become so complicated that we often don’t fully understand them, know what they all are, or know what they do. This incomplete knowledge leads to both an incomplete view of vulnerabilities and, often, a highly volatile, changeable environment.

Our technology platforms carry an uncounted number of relatively low-probability risks that individually deliver relatively low-impact outcomes. But when these risks are chained together in a graph or network (as in any particular attack chain) and end up touching our critical data or systems, they become impossibly improbable catastrophes. We have a lot of data about the high-occurrence, non-material attacks, but almost no data on the low-occurrence, material attacks, and these are what matter most.
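The arithmetic behind "impossibly improbable" is simple multiplication. As a minimal sketch, with invented per-step success probabilities and the (unrealistic) assumption that steps are independent:

```python
# Hypothetical per-step success probabilities for a four-step attack
# chain (e.g. phish, escalate, pivot, exfiltrate). Invented figures.
step_probabilities = [0.05, 0.10, 0.05, 0.02]

def chain_probability(steps):
    """Joint probability that every step in the chain succeeds,
    assuming (unrealistically) that the steps are independent."""
    p = 1.0
    for step in steps:
        p *= step
    return p

print(f"{chain_probability(step_probabilities):.2e}")  # → 5.00e-06
```

Each step is individually unremarkable, yet the joint event sits far below anything our incident data can estimate, which is exactly why the material attacks are the ones we have almost no data about.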

Our adversaries commonly exploit human behaviour, which is only predictable at scale, and our cybersecurity control effectiveness is often unmeasured and regularly degrades over time as adversaries adjust their behaviour to reflect our control environment.

These factors mean that there is a lot of uncertainty in any security risk scenario granular enough to directly drive investment or action decisions. Better risk models, such as FAIR, expose this uncertainty; terrible risk models seek to hide it. Cyber risk analysts commonly conduct little analysis of uncertainty and rarely any sensitivity analysis of their cyber risk models. As a result, outcome uncertainty is suppressed and cyber risk model outcomes tend towards ‘cybergeddon’.
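Exposing uncertainty rather than hiding it can be sketched with a simple Monte Carlo simulation. This is in the spirit of FAIR-style frequency-times-magnitude analysis but is not the FAIR method itself, and every figure below is invented for illustration:

```python
import random

def simulate_annual_loss(n_trials=100_000, seed=42):
    """Monte Carlo sketch: sample an uncertain event frequency and a
    heavy-tailed per-event loss, and return the sorted distribution of
    annual losses. All parameters are hypothetical."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        frequency = rng.uniform(0.1, 2.0)        # events per year, uncertain
        magnitude = rng.lognormvariate(11, 1.5)  # loss per event, heavy-tailed
        losses.append(frequency * magnitude)
    losses.sort()
    return losses

losses = simulate_annual_loss()
median = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
print(f"median annual loss: {median:,.0f}")
print(f"95th percentile:    {p95:,.0f}")
# The wide gap between the median and the tail is the uncertainty a
# good model should present, not collapse into a single point estimate.
```

A model that reports the whole distribution invites sensitivity analysis (vary the input ranges, watch the tail move); a model that reports one number invites ‘cybergeddon’.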

Given a choice between ‘cybergeddon A’ and ‘cybergeddon B’, executive managers use the relative costs of investment or action to judge risk appetite, rather than the risk characteristics. Executive managers are unable to distinguish effective cybersecurity from poor cybersecurity until an event occurs.

Often a cyber risk reduction activity focuses on goals, not outcomes; this allows cybersecurity practitioners to suppress or ignore uncertainty in the cyber risk models they rely upon to make decisions. However, it is rational for executive managers to focus on investment costs in this lemon market for risk analysis, because much of the consequence of cyber events affects external parties (un-costed externalities) and actual outcomes tend to be non-linear in relation to cyber risk model inputs.

If cyber risks are highly uncertain, but the anecdotally observed rate of occurrence is high, and there is potential for non-linear outcomes, then assuming that cyber risk events will happen, rather than may happen, is a good strategy. The key objective of this approach is to reduce the consequences of cyber risks to manageable levels. That requires cyber resilience, including response and recovery, as described by the increasingly popular NIST Cyber Security Framework.

The pragmatic use of cyber risk assessment is to provide a consistent scale for comparing and prioritising decisions in a model where we assume cyber risk events will happen. In doing so, we must be careful not to fall into the trap of believing that the cyber risk model outputs are an accurate and reliable measurement of any individual risk.