Competing Innovations in Cyber

I have had a series of productive discussions with a colleague over the last year about the differences in how cyber attackers and cyber defenders adopt new innovations. His interesting, and itself innovative, contention is that a key problem in cyber security is created by the differently shaped innovation adoption curves of defenders and attackers, and that by investing in changing the shape of defenders' adoption curves the nature of the competition itself will be re-shaped. (I suspect I am doing my colleague something of a disservice with my summary.)

Diffusion of Innovation Curve

This comparison of innovation adoption rests on a strong body of academic research originally developed in the 1950s and effectively summarised by Everett M. Rogers in his book Diffusion of Innovations. I like the approach because it highlights speed of action as a key component of cyber defence: in this case, adopting innovative technologies or tactics, techniques and procedures (TTPs) quickly in order to respond to attackers' innovations.

However, from my experience both advising CISOs and acting as an Interim CISO, I do wonder if there is a related theory of innovation that applies. I would contend that most defensive security controls and TTPs are sustaining in nature, as defined in Clayton Christensen's controversial work The Innovator's Dilemma. Sustaining innovations show gradual improvement, in contrast with disruptive innovations that jump to the next level of performance or value much faster.

Disruptive vs Sustaining Innovations

The key problem of the competition is that the most damaging attacks use disruptive innovations, potential black swans, that our sustaining defences cannot hope to defeat. My worry is that focusing on improving our speed of adoption of defensive innovations immediately places us at a disadvantage because of the sustaining nature of our defences. The serious, and sometimes less serious, adversaries jumped onto disruptive cyber attack innovations some years ago, including industrialised attacks, camouflaged command and control channels, user-focused attacks and zero day or near zero day attacks.

I do think there is definite value here. I agree with Josh Corman that an unfortunate number of CISOs estimate their adversaries' capabilities by reference to their auditors and external pen testers, which leaves them targeting defensive capabilities and adoption rates that follow HD Moore's law, and leaves a serious gap between their defences and the capabilities of the most serious attackers. If we can highlight this issue so that CISOs, and more importantly CEOs, raise their game by improving the speed of adoption of defensive cyber innovations, the next step is to ensure that those defensive innovations are effective against the disruptive offensive innovations.

My own contention is that measuring firms' speed at traversing an Observe-Orientate-Decide-Act (OODA) loop (following John Boyd's work on air combat) is more effective, as the fundamental competition is one between the adversaries and the network defenders.

The OODA Loop

Cyber defence shares concepts with a dogfight: once the fight has started, the winner is the one who gets inside their opponent's OODA loop first. If we can quickly adopt innovations that target the acceleration of our defensive OODA loops we stand a chance of not losing.

Automated IT infrastructure, automated incident response, cyber data analytics, pro-active threat hunting, information sharing, capability sharing and external hygiene monitoring of partners and suppliers are all defensive innovations that present us with the potential for ‘disruptive’ defences that may accelerate our network defence OODA loop. How’s your adoption rate on these technologies?
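As a rough illustration of what accelerating the defensive OODA loop could mean in measurable terms, here is a minimal sketch in Python that computes end-to-end cycle times from incident timelines; the class, field names and example figures are hypothetical rather than taken from any real tooling.

```python
# A minimal sketch (illustrative only): measuring end-to-end defensive OODA
# cycle times from incident timeline data. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import List


@dataclass
class IncidentTimeline:
    observed_at: datetime    # first telemetry hit (Observe)
    orientated_at: datetime  # incident triaged and understood (Orientate)
    decided_at: datetime     # response decision taken (Decide)
    acted_at: datetime       # containment action completed (Act)

    def cycle_time(self) -> timedelta:
        """Elapsed time from first observation to completed action."""
        return self.acted_at - self.observed_at


def median_ooda_cycle(incidents: List[IncidentTimeline]) -> timedelta:
    """Median end-to-end OODA cycle time across a set of incidents."""
    return median(i.cycle_time() for i in incidents)


# Example: compare the loop before and after adopting automated response.
before = [IncidentTimeline(datetime(2015, 1, 5, 9, 0), datetime(2015, 1, 5, 14, 0),
                           datetime(2015, 1, 6, 9, 0), datetime(2015, 1, 7, 17, 0))]
after = [IncidentTimeline(datetime(2015, 6, 1, 9, 0), datetime(2015, 6, 1, 9, 30),
                          datetime(2015, 6, 1, 10, 0), datetime(2015, 6, 1, 12, 0))]
print(median_ooda_cycle(before))  # 2 days, 8:00:00
print(median_ooda_cycle(after))   # 3:00:00
```

The point of a measure like this is not the arithmetic but the comparison: if adopting one of the innovations above does not shorten the median cycle, it has not accelerated the loop.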

Pitfalls of Cyber Data

I jointly presented with Ernest Li at 44con Cyber Security on April 28th 2015, discussing how we use public cyber data and some of the problems we have run into. My presentation is on SlideShare below:

We need to talk about IT

It has long been a truism among security practitioners that security is not an IT problem. This is an attempt to lift the gaze of the security team from technology to the wider business. A laudable and useful goal. However, IT is a security problem.

As we have moved from huge engineering programmes that built computing systems to last for 25 years or more, to smaller projects that deploy in months, and ultimately to agile continual development with a constant trickle of change in hours, we have made trade-offs. The misunderstanding of the Faster, Better, Cheaper (FBC) approach, which assumes only two of those characteristics can exist at the same time, has become almost an excuse for failure and the reason for bish-bosh IT that expects to be replaced so quickly that no real thought is given to long-term implications. In choosing to believe only two of FBC are possible, IT has invariably chosen Faster and Cheaper. There are much-vaunted business reasons for this: time to market, failing fast, responsiveness to stakeholders, efficiency. In many cases those are measurable benefits of the approach, but it is worth noting that the highest-profile adopter of FBC, NASA [PDF], found that it was possible to achieve all three characteristics in an approach that reflects much of what has later come to be called Agile.

There is a problem though, and it is being brought into the light by cyber security. We are increasingly finding ourselves in a dogfight with organised criminal groups, or sometimes state-sponsored militia, who have worked out that automation, service-based extended enterprises and aggressive outsourcing get them inside our Observe-Orientate-Decide-Act (OODA) loops. For security professionals that has meant an increase in the importance of monitoring, threat intelligence, analytics and other technologies or activities that contribute to our situational awareness, to improve our Observe and Orientate, as well as a focus on our incident response plans and capabilities, to improve our ability to Decide and Act. As we decrease the time to get around the security function's OODA loop, we are discovering that protecting an enterprise from cyber attacks requires us to run in IT's OODA loop, and that's a problem.

IT has been building multiple systems to deliver the local maxima of value where the business function or team sits. IT has also been building these localised systems quickly and cheaply, eschewing better as unachievable. The result is that our enterprises are complex systems of systems, and the management platforms for our enterprises are themselves diverse, complex, incomplete and slow. We don't know where all our IT is, we don't know what it does, and we can't change it or patch it safely, either in coverage or quickly enough to affect our adversary's OODA loop. One of my clients, admittedly a huge financial services business, reported that his IT function had deployed over 44 million patches in the previous year. That doesn't scale using the management tooling and platform design that IT tends towards to meet local maxima value.
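To put that figure in some context, a back-of-envelope calculation, assuming purely for illustration that the patches were spread evenly across the year, looks like this:

```python
# Back-of-envelope arithmetic only: 44 million patches spread evenly over a year.
patches_per_year = 44_000_000
days_per_year = 365

print(round(patches_per_year / days_per_year))        # roughly 120,548 patches per day
print(round(patches_per_year / (days_per_year * 24))) # roughly 5,023 patches per hour
```

That works out at more than one patch per second, around the clock, for the entire year; whatever the exact distribution, it is a scale that hand-cranked, locally optimised management tooling cannot keep up with.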

When I raise this I sometimes get told "there's nothing we can do", "the business doesn't care about the complexity, they care about pace", "that's just the way it is". I don't buy it. There are now global-scale cloud businesses and social businesses (Amazon, Salesforce, Google, Facebook, Twitter, Yahoo etc.) that have faced down this problem: by refusing to accept the limitations of traditional small-scale, local-maxima-focused IT, they have built platforms to deliver the global maxima of value for their enterprises. Not systems of systems but true platforms onto which they deploy their applications, some in a much more agile manner than most large enterprises dare come close to yet. Their success at automating and managing the security of their IT platforms is in sharp contrast to the ongoing visible failures of security management in systems-of-systems enterprises relying on a patchwork of localised systems to deliver value.

We will eventually convince the security vendors to let us automate our security controls, we will professionalise and normalise situational awareness, and we will develop a cadre of capable and prepared incident responders. But until IT designs truly Faster and Better and Cheaper platforms for the global maxima of value of the enterprise, rather than the local maxima of the individual business function, our limitations will be those of IT.

Misinterpreted policy?

A couple of months ago I was home ill from work and frankly a little bored.

While idly reading my Twitter feed I reflected on a challenge I had been facing at work: a very technology-focused, agile team that seemed to move faster than the security team could handle. I had realised some time ago that, short of a herculean hiring effort, we needed a combination of automation, delegation and good engagement to achieve the security outcomes we desired.

At about the same time as addressing that challenge, I had been involved in producing an updated acceptable use policy to meet some PCI DSS requirements, which had been a lightly bruising affair. The business has a startup culture where freedom and good sense are valued much more highly than rules. The noticeably positive culture of the organisation was rooted in this and, as a result, the managers resisted the imposition of new rules. It was also the case that the staff cried out for information and knowledge so they could make their own minds up about security; they wanted security awareness training as long as it explained why security mattered and how it worked.

The combination of a fast-moving technology team, the startup culture and the positive results of simply good security communications and engagement was that a written policy seemed anachronistic and almost fossilised.

I posted the following provocative, somewhat tongue-in-cheek, but honest question:

Questioning security policies

This started a Twitter conversation with a number of security professionals. I enjoyed the conversation but I found it frustrating: I was trying to get to the bottom of the real value of security policies, but the conversation didn't really address that and seemed to focus more on the format of successful security policies. It was a fun conversation to have while at home without the distraction of work, but in hindsight I forgot two truths of the Internet:

  1. A conversation on the Internet is a public thing.
  2. Written text on a screen is interpreted without any other context.

I did realise after a while that the conversation was getting a little out of control so I summarised my exit with:

Agree to disagree


The conversation didn’t descend into one of those Internet shaming episodes (thankfully) but I was surprised to receive a ‘storified’ version of the conversation (Here if you would like to read it).

It turns out there was an answer to my original question that makes sense to me, and it came out of the Storify comments by Rowenna Fielding: "If there is no written policy then how can risk decisions be made consistently and in line with the organisation's risk appetite?". That is a foundation you can build upon, with a focus on driving consistency in security risk decision making.

I thought that was the end of it: an interesting if somewhat frustrating conversation in a medium that probably doesn't lend itself to these sorts of philosophy-of-security debates. I was wrong. A month and a half later the conversation spawned two guest posts on the Tripwire blog: Security Policies – To Be Or Not To Be Pointless… and Corporate Security Policies: Their Effect on Security, and the Real Reason to Have Them. I've subsequently been asked my opinion on these posts, hence this blog post.

Reviewing these posts, I think I did not communicate my question well and was misinterpreted. Given the consistency of the misinterpretation, I must assume it was my fault.

To quote from the Storify comments, "I wasn't quite sure whether this gentleman was actually advocating for the abolition of the written policy" in response to my original question: I wouldn't say I went as far as advocating (advocate: a person who publicly supports or recommends a particular cause or policy) so much as questioning. I think Storify is a fascinating mechanism for seeing how someone else interprets your words, although it does feel like a one-sided conversation; kudos to the author for taking the time and effort to produce it.

To respond to the first blog post, by Sarah Clarke, Security Policies – To Be Or Not To Be Pointless…: "Phil's core and continuing assertion was that good tech, awareness and risk management negated the need for any written security policies." Again, I wouldn't say I asserted that they negated the need, but I wasn't (during the conversation at least) convinced of their inherent value as a format and/or mechanism for achieving the outcomes that are somewhat foggily assigned to them. I think this blog post reiterates much of the valuable accepted wisdom about policies among practitioners who want to do more than tick the security compliance box on their management checklist.

To respond to the second blog post, by Claus Cramon Houmann, Corporate Security Policies: Their Effect on Security, and the Real Reason to Have Them: "The discussion was started by a person critically stating that as far as he was concerned, they have no value at all." Um, actually no, that is wrong. I started by questioning their value and ended by saying I hadn't been convinced of their value. I really hope I didn't appear as arrogant as that quote suggests. I think this blog post lists a pragmatic selection of reasons to have security policies but skips what is, for me, the key reason: risk communication.

I fundamentally believe that helping and nudging staff to make 'better' risk decisions on behalf of the company is the ultimate aspiration of a security team, which cannot be there looking over their shoulders telling them what it thinks they should do. Security is ultimately a people issue and we cannot effectively manage it primarily through technology. Good risk communication is about helping staff make 'better' decisions: decisions whose risk outcome is more favourable to the organisation. Bad risk decisions (unfavourable in outcome to the organisation) will efficiently route around all the technology controls you deploy.

I thoroughly recommend people read these posts and I commend their authors for writing on policy, as it is an area where many untested historical assumptions lie. I think questioning the orthodoxy that has been built around the security 'profession' is important. I also think we should start measuring the effectiveness of policy as a method for communicating risk appetites and driving consistency in decision making. The hypothesis is that a security policy increases consistent risk decision making within the organisation's stated risk appetite, compared to an organisation without a policy; now that would be a *very* interesting and important study to conduct.
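As a thought experiment only, here is a minimal sketch in Python of how the consistency side of that hypothesis might be scored; the data model, the 1-5 risk scale, the appetite threshold and the example figures are all hypothetical and not drawn from any real study.

```python
# Thought-experiment sketch of the hypothesised study (hypothetical data model):
# score how consistently staff risk decisions fall within a stated risk appetite,
# then compare an organisation with a written policy against one without.
from typing import List, Tuple

# Each decision: (assessed_risk on a 1-5 scale, accepted: bool)
Decision = Tuple[int, bool]


def consistency_rate(decisions: List[Decision], appetite: int) -> float:
    """Fraction of decisions consistent with the appetite: risks at or below
    the appetite threshold are accepted, risks above it are not."""
    consistent = sum(
        1 for risk, accepted in decisions
        if accepted == (risk <= appetite)
    )
    return consistent / len(decisions)


with_policy = [(2, True), (4, False), (3, True), (5, False)]
without_policy = [(2, True), (4, True), (3, False), (5, True)]
print(consistency_rate(with_policy, appetite=3))     # 1.0
print(consistency_rate(without_policy, appetite=3))  # 0.25
```

A real study would need a defensible way of assessing risk and a richer notion of consistency than a single threshold, but even a crude score like this makes the with-policy versus without-policy comparison concrete.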

The conversation was interesting, fun, a little frustrating and in hindsight much too easy to be misinterpreted. I wonder if those are some of the key characteristics of Twitter itself.


Security Analytics Beyond Cyber

I presented at 44con 2014 on moving security analytics on from network defence and rapid response towards supporting data-driven and evidence-driven security management. My presentation is on SlideShare below:

A video of the talk is for sale from 44con here:

44CON 2014 Conference Videos – Trailer from 44CON on Vimeo.
