I believe the security profession is coming close to an inflection point. The growing dependence on technology in our increasingly digital societies, the systemic and personal harm that data breaches can cause, and the real-world consequences of failures in an IoT-driven physical environment mean that security failures are no longer just an interesting news item or a regulatory concern. They matter.
WannaCry and its impact on the NHS is a strong example of how lives can be harmed and disrupted as an unintended outcome of digital criminality.
All CISOs, or their analogues, tend to sit one or two steps below a business’s executive committee in their respective organisation structures. Most will continuously analyse the organisation they’re in, compare it to the threat environment, and develop some form of plan to close any obvious gaps and build sustainable processes to prevent such gaps opening up again. These programmes of work take money, resources, time and, critically, focus from the organisation to deliver. Sometimes these are fully funded, often they are part-funded, and occasionally they are thrown out in their entirety by the business executive committee or its nominated lead in the area. Many CISOs only get one shot at this, two if they’re lucky. Often, after the first part-funded transformation process, they settle for incremental improvements within a resource envelope defined annually by members of the executive committee.
Then Equifax happens.
A CISO inside a resource envelope is either grossly negligent or, more likely, makes a bad prioritisation call amongst few good choices, and then has to ‘retire’. It is entirely possible that the CISO was doing a great job right up until they got caught out. It is possible that no CISO could have done a better job within the envelope they were operating in.
In a previous role I was involved in a number of post-breach reviews of the state of security, where Boards of Directors were seeking to identify whether their senior and executive management had been negligent. In every case we found failings in security management as a result of a lack of resources, but critically those failings were not unusual compared to other firms on which we had conducted security reviews. In each case the lawyers determined that, because common practice (if not truly good practice) had been followed, negligence was harder to prove. That says something about the state of security management.
While this particular case seems to be more about investigations than security, it appears that everyone, even the SEC, makes prioritisation calls that in hindsight verge on recklessness.
If delivering the best possible outcome within external constraints, while always being at risk of a major breach, is the best we can achieve, should we really expect CISOs to try any harder? Are CISOs bureaucrats who shrug their shoulders at a breach and move on to the next employer? Have CISOs collectively settled for the best they could get rather than calling out the limits of what they’ve been given?
Should CISOs follow the example of Alex Stamos and leave organisations that don’t agree with their professional opinion on security needs? Without knowing any detail of the case, I posit that Alex had little choice apart from leaving, as there was no other mechanism by which a CEO could be effectively challenged without protection for him. I think he likely took the best of the bad options available to him, but I don’t think a wave of mass resignations by frustrated CISOs is likely to improve the situation.
Does society expect CISOs to be like doctors, who call out bad management by their hospitals, or pilots, who refuse to fly planes they don’t believe are safe? A more recent and instructive example is the definition of a Data Protection Officer under the EU General Data Protection Regulation (GDPR), who must be independent of management and report to the highest levels of an organisation in order to be able to express dissent. Is that something that should be defined for CISOs as well?
- Pilots: “a (Pilot in Command) PIC is responsible for the overall safety of his flight, including passengers, crewmembers, cargo and the aircraft, and for making sure the flight is in compliance with all applicable regulations… the PIC cannot initiate an aircraft operation if he knows that certain safety regulations, including flight attendant duty and rest rules, would be violated.” FAA PIC Responsibility [PDF]
- Doctors: “All doctors have a duty to raise concerns where they believe that patient safety or care is being compromised by the practice of colleagues or the systems, policies and procedures in the organisations in which they work.” GMC Good Medical Practice
- Data Protection Officers: “DPOs, ‘whether or not they are an employee of the controller, should be in a position to perform their duties and tasks in an independent manner’” and “the DPO ‘shall directly report to the highest management level of the controller or the processor’” WP29 Guidance on DPOs
Other professions, such as engineers, are covered by the general “fitness for purpose” and “duty to warn” legal liabilities of any profession. There is also legally defined whistleblower protection in both the private and public sectors. In theory these could apply to CISOs as well, although to my knowledge they have never been applied in this way. It’s important to balance both accountability/liability (to encourage challenge) and protection (to discourage harm to the individual).
Should CISOs have a higher calling that ultimately trumps their relationship with employers, on behalf of customers, shareholders or society as a whole? Is cyber security now so critical to our society that we should formally and legally define the expectations, accountabilities, responsibilities, liabilities and protections for CISOs?
I’m keen to pursue this line of thinking and have a debate about the future ‘professionalisation’ of security beyond the ethics-and-certificates focus we currently have.