This is the first in a series of posts looking at the current state of pen testing as I see it and presenting some ideas for the future. In this post I will apply a framework to understanding the process of pen testing.
In the next post I discuss some of the problems I see in pen testing.
The pen testing process is a form of expert behaviour similar to intelligence analysis, a field where a lot of work has gone into understanding the key components of expert performance. That performance is often broken down into a process flow as follows:
Gather Information → Represent in Expert Schema → Develop Insight → Define Product or Action
An expert schema consists of the patterns and knowledge, developed through experience, of what information is important when performing expert tasks. It is also the encoding of raw information into a form that is useful to an expert. The expert schema provides the framework and filter through which large swathes of information can be reduced to what is relevant and useful.
Developing insight is the manipulation of appropriately filtered and encoded information to identify meaning, by developing and testing hypotheses. That meaning must matter in the context of the eventual outcome so that it can be used to define a product or action that leads to that outcome.
This is the classic ‘sensemaking’ process by which information is given meaning by an expert.
Sensemaking & Pen Testing
There is a clear mapping between the sensemaking activities and the activities in a penetration test. For pen testing the process looks a little more like:
Scan & exploit → Characterise discovered vulnerabilities → Understand causes and impacts of vulnerabilities in the customer context → Recommend prioritised mitigations
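The four stages above can be sketched as a pipeline over findings. Everything in this sketch — the `Finding` structure, its field names, the example vulnerability, and the customer context — is hypothetical and purely illustrative of the flow, not a real tool's API:

```python
from dataclasses import dataclass

# Hypothetical structures: field names and stage functions are illustrative only.

@dataclass
class Finding:
    name: str                 # e.g. "SQL injection in /login"
    raw_output: str           # scanner/exploit evidence (gather information)
    category: str = ""        # expert schema: what class of weakness is this?
    business_impact: str = "" # insight: what does it mean for this customer?
    mitigation: str = ""      # product: recommended action
    priority: int = 0         # higher = fix sooner

def characterise(f: Finding) -> Finding:
    """Represent in expert schema: encode raw evidence as a known vulnerability class."""
    f.category = "injection" if "SQL" in f.raw_output else "uncategorised"
    return f

def develop_insight(f: Finding, customer_context: dict) -> Finding:
    """Develop insight: tie the vulnerability to causes and impacts for this customer."""
    if f.category == "injection" and customer_context.get("handles_card_data"):
        f.business_impact = "Attacker could extract cardholder data (PCI DSS breach)."
        f.priority = 10
    else:
        f.business_impact = "Limited direct business impact identified."
        f.priority = 3
    return f

def recommend(f: Finding) -> Finding:
    """Define product or action: a prioritised, explained mitigation."""
    f.mitigation = f"[P{f.priority}] Fix {f.name}: {f.business_impact}"
    return f

# Scan & exploit would normally produce these findings automatically.
finding = Finding(name="SQL injection in /login", raw_output="SQL error on ' OR 1=1")
report = recommend(develop_insight(characterise(finding), {"handles_card_data": True}))
print(report.mitigation)
```

The point of the sketch is that `develop_insight` is a distinct step with its own input (the customer context) — it cannot be produced from the expert schema alone.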
Delivering the outcome
There is a final step that the intelligence community has learnt through bitter experience must be part of a successful intelligence analysis process: delivering the outcome.
The delivering-the-outcome step is usually not completed directly by an intelligence analyst, but it is the reason the sensemaking process is required in the first place. The meaning derived from sensemaking is academic unless the resulting product is used to achieve some form of change.
Pen testing is similar: if the customer doesn't mitigate the vulnerabilities found and reported by the pen tester, what was the point of the test in the first place?
Not delivering the outcome
If a pen tester retests a customer and discovers that they haven't fixed a previously reported vulnerability, it is very easy to focus on the delivering-the-outcome step and point the finger at the customer for failing to act. However, I think this often misses a flaw earlier in the pen testing process.
In my experience many pen testers skip from representing their findings in their expert schema (characterising discovered vulnerabilities) directly to defining a product or action (recommending prioritised mitigations) without generating any insight (understanding the causes and impacts of vulnerabilities in the customer context).
Without insight there is no impetus for the customer to change their organisational behaviours, and without an explanation of why some vulnerabilities matter to the business, the client will find the prioritisation delivered through the expert schema impenetrable (the client is rarely an expert themselves). Both conditions are more likely to lead to paralysis than to delivering the outcome, because the customer doesn't understand why they should do anything, or really what they should do.
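To make this failure mode concrete, consider two versions of the same report entry. The finding, hostname, and wording below are all invented for illustration; the first entry has been characterised and prioritised through the expert schema only, while the second has been through the insight step:

```python
# Schema-only entry: characterised and given a severity, but no insight step.
# The "High" label is meaningful to the tester, opaque to the client.
without_insight = {
    "finding": "TLS 1.0 enabled on api.example.com",
    "severity": "High",
    "recommendation": "Disable TLS 1.0",
}

# Entry with insight: causes and impacts explained in the customer's context,
# so the recommendation can target the organisational behaviour, not just the symptom.
with_insight = {
    **without_insight,
    "cause": "Legacy load balancer config copied across environments.",
    "business_impact": (
        "Payment API traffic could be downgraded and intercepted, "
        "risking customer card data and PCI compliance."
    ),
    "recommendation": (
        "Disable TLS 1.0 in the load balancer template and redeploy all "
        "environments, so the fix cannot silently regress."
    ),
}

print(with_insight["business_impact"])
```

Only the second entry gives the client both a reason to act and a change that addresses the cause rather than the single instance.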
In the next post I will look more closely at some of the problems I see in pen testing currently and will follow that up with a post looking at how things might change to solve them.