I get a lot of pen test reports to read. They vary from beautifully crafted prose extolling the skilled exploitation of the system by security testing artistes to functional dumps of tool output into a Word document by jobbing vulnerability scanners.
Usually I read the report once: I use the summary to work out what detail I need to understand, and use the risk or vulnerability tables to pinpoint the urgent issues to fix. Those vulnerability tables are then transferred to spreadsheets, where extra columns tracking the management of the identified issues are added and populated.
These spreadsheets become the key tool for managing issues. Usually I manage a spreadsheet per system and include within it any other outstanding security risks and issues affecting that system discovered outside the penetration test. The operational security meetings I hold with the senior security managers from my suppliers and the risk management representatives from my stakeholders are driven by a walk-through of identified or outstanding issues, calling out those that don't seem to be getting fixed or that are receiving push-back from admin teams in the supplier environments. These meetings also give stakeholder risk management representatives the opportunity to review proposed resolutions and check that their risks are being well managed (it is, after all, their risk, not mine).
I remember lots of discussions, back when I was on the testing side of the industry, trying to divine what customers actually wanted in their reports, usually by digging through the guts of a failed or failing client relationship where technical reports ended up in auditors' hands, or vice versa.
Now that I am an end user of these reports, it is clear what I need:
- 1) A high-level summary that gives me a general feel for the state of the system's security. I need this to answer the question: am I looking at a fundamental flaw stopping go-live, or just a generally shoddy build process that needs tidying up? This needs to be aimed at a business user, as that's where it gets sent after I read it. If you're going to add nice graphs or some sort of visualisation, do it here (and read Edward Tufte first!)
- 2) A tabular list of issues including a title, a description, a list of hosts affected, an estimation of how bad it is, and a suggested fix or workaround. If you can give this to me in a spreadsheet format, you've just bought me time and earned my gratitude.
The estimation of how bad an issue is could be as simple as high/medium/low; more actionable data for me to work with would include ease of exploit, complexity to fix, and the extent of the resulting compromise. Ease of exploit plays back into my risk model, so I can see whether I need to worry about the issue; complexity to fix feeds my remediation plan and the risk management decision on going live; and the extent of the resulting compromise is used to drive behaviour from non-security team members who don't get the problem.
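To make the workflow above concrete, here is a minimal sketch of turning a report's findings table into a management spreadsheet. The column names and the sample findings are my own illustrative assumptions, not any standard report format; the tracking columns are the kind I add by hand today.

```python
import csv
import io

# Hypothetical findings table, as it might arrive in a report's spreadsheet
# export. Column names here are assumptions for illustration only.
REPORT_CSV = """\
title,description,hosts,ease_of_exploit,complexity_to_fix,extent_of_compromise,suggested_fix
Deprecated TLS versions,Server still negotiates TLS 1.0,"web01, web02",Medium,Low,Session data,Disable TLS 1.0/1.1 in the server config
Default admin credentials,Admin console accepts vendor defaults,app01,High,Low,Application data,Set a unique strong password
"""

# Extra columns I add for issue management, populated during ops meetings.
TRACKING_COLUMNS = ["owner", "target_date", "status"]

def to_tracker(report_csv: str) -> list[dict]:
    """Read report findings and append blank issue-management columns."""
    rows = list(csv.DictReader(io.StringIO(report_csv)))
    for row in rows:
        for col in TRACKING_COLUMNS:
            row.setdefault(col, "")  # blank until the ops meeting fills it in
    return rows

if __name__ == "__main__":
    tracker = to_tracker(REPORT_CSV)
    print(f"{len(tracker)} issues; columns: {list(tracker[0])}")
```

A report delivered in this shape drops straight into the tracking spreadsheet with no retyping, which is exactly the time saving described above.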
I hope the key message of this blog post is clear: I need to fix or manage the problems found in pen tests, and everything that makes my job easier means you were more effective at yours.
That’s what I need from a pen test report.
What I want is pretty much pie in the sky for pen tests these days, but it would be great to get a test script describing how a particular issue was found. Which tool was used? Was it confirmed with multiple tools or techniques? If we want to check whether we fixed it, what command-line switches do we need?
A major bonus for me would be an archive of the shell history from the pen tester's laptop, with regular time/date stamps, or maybe a video of the tester's desktop. This would let me retest a single issue to see if it's fixed without needing a full-blown follow-up pen test.
I would happily trade the time spent writing the flowery report for an actionable, usable list of issues and a test script for each one.