Making sense of pen testing, part one

This is the first in a series of posts looking at the current state of pen testing as I see it and presenting some ideas for the future. In this post I apply a sensemaking framework to understanding the process of pen testing.

In the next post I discuss some of the problems I see in pen testing.

Sensemaking

The pen testing process is a form of expert behaviour similar to intelligence analysis, a field where there has been a lot of work on understanding the key components of expert performance. That performance is often broken down into a process flow as follows:

Gather Information → Represent in Expert Schema → Develop Insight → Define Product or Action

An expert schema consists of the patterns and knowledge, built up through experience, of what information is important when performing expert tasks. It is also the encoding of raw information into a form that is useful to the expert. The expert schema provides the framework and filter through which large swathes of information can be reduced to what is relevant and useful.

Developing insight is the manipulation of that filtered and encoded information to identify meaning, by developing and testing hypotheses. That meaning must matter in the context of the eventual outcome if it is to be used to define a product or action that leads to the desired outcome.

This is the classic ‘sensemaking’ process by which information is given meaning by an expert.

Sensemaking and pen testing

There is a clear mapping between the sensemaking activities and the activities in a penetration test. For pen testing the process looks a little more like:

Scan & exploit → Characterise discovered vulnerabilities → Understand causes and impacts of vulnerabilities in the customer context → Recommend prioritised mitigations
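
To make the mapping concrete, here is a minimal sketch in Python (all names, scores and data are invented for illustration, not a real tool) that models the stages as explicit steps, including the insight stage that the rest of this post argues is often skipped:

    from dataclasses import dataclass

    # Hypothetical sketch of the pen testing sensemaking pipeline.
    # All names, scores and data below are invented for illustration.

    @dataclass
    class Finding:
        vuln: str          # raw observation from scanning and exploitation
        severity: float    # the expert schema's encoding (a CVSS-like score)
        insight: str = ""  # meaning in the customer context -- often missing

    def scan_and_exploit() -> list[Finding]:
        # Gather information: raw discoveries, not yet interpreted.
        return [Finding("SQL injection in /search", severity=8.1),
                Finding("Verbose server banner", severity=2.0)]

    def develop_insight(finding: Finding, context: dict) -> Finding:
        # Develop insight: tie the finding to assets the customer cares about.
        asset = context.get(finding.vuln)
        if asset:
            finding.insight = f"it exposes {asset}"
        return finding

    def recommend(findings: list[Finding]) -> list[str]:
        # Define product or action: prioritised, explained where insight exists.
        ranked = sorted(findings, key=lambda f: f.severity, reverse=True)
        return [f"{f.vuln}: fix this because {f.insight}" if f.insight
                else f"{f.vuln}: severity {f.severity}"  # schema-only, opaque
                for f in ranked]

    customer_context = {"SQL injection in /search": "the billing database"}
    findings = [develop_insight(f, customer_context) for f in scan_and_exploit()]
    print("\n".join(recommend(findings)))

The point of making the insight stage an explicit step is that its absence becomes visible: without it, the pipeline still produces a prioritised list, just not one the customer can interpret.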

Delivering the outcome

There is a final step that the intelligence community has learnt, through bitter experience, must be considered in a successful intelligence analysis process: delivering the outcome.

The delivering the outcome step is usually not completed directly by an intelligence analyst, but it is the reason the sensemaking process is required in the first place. The meaning derived from sensemaking is academic unless the resulting product is used to achieve some form of change.

The same is true of pen testing: if the customer doesn’t mitigate the vulnerabilities found and reported by the pen tester, then what was the point of the test in the first place?

Not delivering the outcome

If a pen tester retests a customer and discovers that they haven’t fixed a previously reported vulnerability, it is very easy to focus on the delivering the outcome step and point the finger at the customer for failing to act. However, I think this often misses a flaw earlier in the pen testing process.

In my experience many pen testers skip from representing their findings in their expert schema (characterising discovered vulnerabilities) directly to defining a product or action (recommending prioritised mitigations) without generating any insight (understanding the causes and impacts of vulnerabilities in the customer context).

Without any insight there is no impetus for the customer to change their organisational behaviours, and without an explanation of why particular vulnerabilities matter to the business, the client will find the prioritisation delivered through the expert schema impenetrable (the client is rarely an expert themselves). Both of these conditions are more likely to lead to paralysis than to delivery of the outcome: the customer doesn’t understand why they should do anything, or really what they should do.
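
To make that concrete with the hypothetical sketch from earlier: run it with an empty customer context and the output is exactly the kind of bare, schema-only list that clients find impenetrable:

    findings = [develop_insight(f, {}) for f in scan_and_exploit()]
    print("\n".join(recommend(findings)))
    # SQL injection in /search: severity 8.1
    # Verbose server banner: severity 2.0

A severity ranking tells the client what to do, but not why it matters to them.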


In the next post I will look more closely at some of the problems I currently see in pen testing, and will follow that up with a post looking at how things might change to solve them.

5 thoughts on “Making sense of pen testing, part one”

  1. Nice post. I’m glad that you feel our pain.

    This is the pentesters’ dilemma, isn’t it? The only way that pentesters can understand the causes and impacts of vulnerabilities in the customer context is by reading and understanding (or their colleagues writing?) the risk assessment.

    Are we hiring pentesters or risk experts or do we expect them to be both now?

    I keep carping on about this, but there is plenty of attack path analysis material out there to show a customer how cascade and compound failures give an attack path resulting in wholesale compromise. That’s useful intelligence that adds real value, especially if it shows that fixing an upstream vulnerability severely limits scope further down the attack tree.

    In all honesty, security analysts spend an inordinate amount of time convincing business owners of the importance of vulnerabilities in isolation (there are often a number of reasons why it has to be done this way, I admit). That is why the usual flat list of vulnerabilities (including false positives) commonly seen in pentest reports is not actually all that useful in getting things fixed.

  2. I think one of our problems is that customers expect testers to be risk experts as well, while testers expect customers to understand testing.

    Attack path analysis is still, to my mind, a more advanced schema for expert analysis. It encodes the expert’s views of how attacks might occur in the particular business, and can be used to test hypotheses about the security of particular business assets, or about the quality and coverage of the security defences the business delivers. (A toy sketch of this idea follows the comments below.)

  3. Working on a tool to do attack tree analysis at the mo – details will be released to the restricted audience, when I’m not ploughing through my favourite book right now, which is Ross Anderson’s 1,000 page monolith!
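
As a rough illustration of the attack path analysis discussed in the comments above, here is a toy sketch (all nodes and edges invented, not any particular tool). Representing attack paths as a graph makes it easy to show a customer that fixing one upstream vulnerability removes every compromise path that passes through it:

    # Toy attack path model; edges mean "compromising A enables attacking B".
    attack_graph = {
        "internet": ["web server"],
        "web server": ["app server"],            # e.g. via an upstream RCE
        "app server": ["database", "file share"],
        "database": [],
        "file share": [],
    }

    def paths_to(graph, start, target, path=None):
        # Enumerate every attack path from start to target.
        path = (path or []) + [start]
        if start == target:
            return [path]
        return [p for nxt in graph.get(start, [])
                for p in paths_to(graph, nxt, target, path)]

    def fix(graph, node):
        # Mitigating a node removes it as a stepping stone.
        return {k: [n for n in v if n != node]
                for k, v in graph.items() if k != node}

    print(paths_to(attack_graph, "internet", "database"))
    # [['internet', 'web server', 'app server', 'database']]

    print(paths_to(fix(attack_graph, "web server"), "internet", "database"))
    # [] -- fixing the upstream flaw prunes everything downstream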
