We need to talk about IT

It has long been a truism among security practitioners that security is not an IT problem. This is an attempt to lift the gaze of the security team from technology to the wider business, a laudable and useful goal. However, IT is a security problem.

As we have moved from huge engineering programs that built computing systems to last 25 years or more, to smaller projects that deploy in months, and ultimately to agile continual development with a constant trickle of change in hours, we have made trade-offs. The misunderstanding of the Faster, Better, Cheaper (FBC) approach, which assumes only two of those characteristics can exist at the same time, has become almost an excuse for failure and the justification for bish-bosh IT that expects to be replaced so quickly that no real thought is given to long-term implications. In choosing to believe only two of FBC are possible, IT has invariably chosen Faster and Cheaper. There are much-vaunted business reasons for this: time to market, failing fast, responsiveness to stakeholders, efficiency. In many cases those are measurable benefits of the approach, but it is worth noting that the highest-profile adopter of FBC, NASA [PDF], found that it was possible to achieve all three characteristics in an approach that reflects much of what has later come to be called Agile.

There is a problem, though, and cyber security is bringing it into the light. We increasingly find ourselves in a dogfight with organised criminal groups, or sometimes state-sponsored militia, who have worked out that automation, service-based extended enterprises and aggressive outsourcing get them inside our Observe-Orientate-Decide-Act (OODA) loops. For security professionals that has meant an increase in the importance of monitoring, threat intelligence, analytics and other technologies or activities that contribute to our situational awareness, improving our ability to Observe and Orientate, as well as a focus on our incident response plans and capabilities, improving our ability to Decide and Act. But as we decrease the time it takes to get around the security function’s OODA loop, we are discovering that protecting an enterprise from cyber attacks requires us to run inside IT’s OODA loop, and that’s a problem.

IT has been building multiple systems to deliver the local maxima value where the business function or team exists. IT has also been building these localised systems quickly and cheaply, eschewing better as unachievable. The result is that our enterprises are complex systems of systems, and the management platforms for our enterprises are themselves diverse, complex, incomplete and slow. We don’t know where all our IT is, we don’t know what it does, and we can’t change it or patch it safely, with sufficient coverage, or quickly enough to affect our adversary’s OODA loop. One of my clients, admittedly in a huge financial services business, reported that his IT function had deployed over 44 million patches in the previous year. That doesn’t scale using the management tooling and platform design that IT tends towards to meet local maxima value.
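To put that figure in perspective, a quick back-of-the-envelope calculation (assuming, purely for illustration, an even spread of patches over the year) shows the throughput such a regime implies:

```python
# Rough throughput implied by 44 million patches deployed in one year.
# The even spread over the year is a simplifying assumption for illustration.
patches_per_year = 44_000_000

per_day = patches_per_year / 365                  # ~120,548 patches a day
per_minute = patches_per_year / (365 * 24 * 60)   # ~84 patches a minute

print(f"{per_day:,.0f} patches per day")
print(f"{per_minute:,.1f} patches per minute")
```

At roughly 84 patches a minute, around the clock, it is clear why manual change management and locally optimised tooling cannot keep up.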

When I raise this I sometimes get told “there’s nothing we can do”, “the business doesn’t care about the complexity, they care about pace”, “that’s just the way it is”. I don’t buy it. There are now global-scale cloud and social businesses (Amazon, Salesforce, Google, Facebook, Twitter, Yahoo etc.) that have faced down this problem: by refusing to accept the limitations of traditional small-scale, local-maxima-focused IT, they have built platforms to deliver the global maxima value for their enterprises. Not systems of systems but true platforms onto which they deploy their applications, some in a much more agile manner than most large enterprises yet dare come close to. Their success at automating and managing the security of their IT platforms is in sharp contrast to the ongoing visible failures of security management in systems-of-systems enterprises relying on a patchwork of localised systems to deliver value.

We will eventually convince the security vendors to allow us to automate our security controls, we will professionalise and normalise situational awareness, and we will develop a cadre of capable and prepared incident responders. But until IT designs truly Faster and Better and Cheaper platforms for the global maxima value of the enterprise, rather than the local maxima of the individual business function, our limitations will be those of IT.

Misinterpreted policy?

A couple of months ago I was home ill from work and frankly a little bored.

While idly reading my Twitter feed I reflected on a challenge I had been facing at work: a very technology-focused, agile team that seemed to move faster than the security team could handle. I had realised some time ago that, short of a herculean hiring effort, we needed a combination of automation, delegation and good engagement to achieve the security outcomes we desired.

At about the same time as addressing that challenge I had also been involved in producing an updated acceptable use policy to meet some PCI DSS requirements, which had been a lightly bruising affair. The business has a startup culture where freedom and good sense are valued much more highly than rules. The noticeably positive culture of the organisation was rooted in this, and as a result the managers resisted the imposition of new rules. At the same time the staff cried out for information and knowledge so they could make up their own minds about security; they wanted security awareness training as long as it explained why security mattered and how it worked.

The combination of a fast-moving technology team, the startup culture and the positive results from good security communications and engagement meant that a written policy seemed anachronistic, almost fossilised.

I posted the following provocative, somewhat tongue in cheek, but honest question:

Questioning security policies

This started a Twitter conversation with a number of security professionals. I enjoyed the conversation but found it frustrating: I was trying to get to the bottom of the real value of security policies, but the conversation didn’t really address that and focused more on the format of successful security policies. It was a fun conversation to have while at home without the distraction of work, but in hindsight I forgot two truths of the Internet:

  1. A conversation on the Internet is a public thing.
  2. Written text on a screen is interpreted without any other context.

I did realise after a while that the conversation was getting a little out of control so I summarised my exit with:

Agree to disagree


The conversation didn’t descend into one of those Internet shaming episodes (thankfully), but I was surprised to receive a ‘storified’ version of the conversation (here, if you would like to read it).

It turns out there was an answer to my original question that makes sense to me, and it came from Rowenna Fielding in the Storify comments: “If there is no written policy then how can risk decisions be made consistently and in line with the organisation’s risk appetite?”. That is a foundation you can build upon, with a focus on driving consistency in security risk decision making.

I thought that was the end of it: an interesting if somewhat frustrating conversation in a medium that probably doesn’t lend itself to these sorts of philosophy-of-security debates. I was wrong. A month and a half later the conversation spawned two guest blog posts on the Tripwire blog: Security Policies – To Be Or Not To Be Pointless… and Corporate Security Policies: Their Effect on Security, and the Real Reason to Have Them. I’ve subsequently been asked my opinion on those posts, hence this one.

Reviewing these posts, I think I communicated my question poorly and was misinterpreted. Given the consistency of the misinterpretation, I must assume the fault was mine.

To quote from the Storify comments, “I wasn’t quite sure whether this gentleman was actually advocating for the abolition of the written policy”, in response to my original question: I wouldn’t say I went as far as advocating (advocate: a person who publicly supports or recommends a particular cause or policy) so much as questioning. I think Storify is a fascinating mechanism for seeing how someone else interprets your words, though it does feel like a one-sided conversation; kudos to the author for taking the time and effort to produce it.

To respond to the first blog post, by Sarah Clarke, Security Policies – To Be Or Not To Be Pointless…: “Phil’s core and continuing assertion was that good tech, awareness and risk management negated the need for any written security policies.” Again, I wouldn’t say I asserted that they negated the need, but I wasn’t (during the conversation at least) convinced of their inherent value as a format and/or mechanism for achieving the outcomes that are somewhat foggily assigned to them. I think this post reiterates much of the valuable accepted wisdom about policies among practitioners who want to do more than tick the security compliance box on their management checklist.

To respond to the second blog post, by Claus Cramon Houmann, Corporate Security Policies: Their Effect on Security, and the Real Reason to Have Them: “The discussion was started by a person critically stating that as far as he was concerned, they have no value at all.” Actually, no, that is wrong. I started by questioning their value and ended by saying I hadn’t been convinced of it. I really hope I didn’t appear as arrogant as that quote suggests. I think this post lists a pragmatic selection of reasons to have security policies but skips what is, for me, the key reason: risk communication.

I fundamentally believe that helping and nudging staff to make ‘better’ risk decisions on behalf of the company is the ultimate aspiration of a security team that cannot be there looking over their shoulders, telling them what the security team thinks they should do. Security is ultimately a people issue, and we cannot effectively manage it primarily through technology. Good risk communication is about helping staff make ‘better’ decisions: decisions whose risk outcome is more favourable to the organisation. Bad risk decisions (unfavourable in outcome to the organisation) will efficiently route around all the technology controls you deploy.

I thoroughly recommend people read these posts, and I commend their authors for writing on policy, as it is an area where many untested historical assumptions lie. Questioning the orthodoxy that has been built around the security ‘profession’ is important, and we should start measuring the effectiveness of policy as a method for communicating risk appetites and driving consistency in decision making. The hypothesis is that a security policy increases consistent risk decision making within the organisation’s stated risk appetite, compared to an organisation without one. That would be a *very* interesting and important study to conduct.
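To make that concrete, here is a minimal sketch of how such a study might be analysed. Everything here is hypothetical: the decision scores are invented, and the assumption is simply that each staff member’s risk decision can be scored on a common scale against the stated risk appetite, with a lower spread of scores indicating more consistent decision making.

```python
# Hypothetical sketch: compare the spread of risk-decision scores between
# staff in an organisation with a written policy and one without.
# Lower spread (standard deviation) = more consistent decision making.
# All data below is invented for illustration.
import random
import statistics

random.seed(0)
with_policy = [random.gauss(3.0, 0.5) for _ in range(50)]     # tighter spread
without_policy = [random.gauss(3.0, 1.2) for _ in range(50)]  # wider spread

# Observed difference in spread between the two groups.
observed = statistics.pstdev(without_policy) - statistics.pstdev(with_policy)

# Permutation test: shuffle all scores together and re-split repeatedly,
# to see how often chance alone produces a difference this large.
pooled = with_policy + without_policy
trials = 2000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:50], pooled[50:]
    if statistics.pstdev(b) - statistics.pstdev(a) >= observed:
        count += 1
p_value = count / trials

print(f"observed spread difference: {observed:.2f}, p ≈ {p_value:.3f}")
```

A small p-value would support the hypothesis that policy is associated with more consistent decisions; a real study would of course need many organisations, a defensible scoring method, and controls for confounders such as sector and maturity.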

The conversation was interesting, fun, a little frustrating and in hindsight much too easy to be misinterpreted. I wonder if those are some of the key characteristics of Twitter itself.


Security Analytics Beyond Cyber

I presented at 44CON 2014 on moving security analytics on from network defence and rapid response towards supporting data-driven and evidence-driven security management; my presentation is on SlideShare below:

A video of the talk is for sale from 44CON here:

44CON 2014 Conference Videos – Trailer from 44CON on Vimeo.

Security Analysis for Humans

Following a highly enjoyable and usefully challenging conversation with Eric Leandri from Qwant.com I was inspired to consider some guiding principles for conducting security analysis.

With an obvious hat tip to the Zen of Python the following is what I am aspiring to meet in the increasingly data-driven security consulting work I am engaged in:


If it’s hard to explain, it’s probably bad analysis.

If you’re not making a decision easier, what’s the point?

Hypotheses without goals are pointless.

Measurement without hypothesis is not analysis.

Explicit and transparent analysis matters.

Beautifully designed output matters.

Readability matters.



I’d love feedback from anyone else working in the field.

Protecting Information About Networks, The Organisation and Its Systems

I recently wrote a report with a number of colleagues for the Centre for the Protection of National Infrastructure (CPNI) on the network reconnaissance phase of a targeted attack following initial exploitation. The report covers what is targeted, how the attackers operate and what controls help. Below is a summary infographic, and below the cut are the briefing presentation I delivered and the full report.


