
In our efforts to promote secure behaviour, our task is often made more difficult by the fact that the people we need to influence are frequently not the same people who would suffer in the event of a security breach. Typically, those who would suffer most are the shareholders, data subjects or senior management; employees and contractors don't necessarily have the same level of care. There are circumstances where the person managing the risk is the same person who would suffer, for example in employee-owned companies, but this is relatively rare.
In economics, 'moral hazard' describes a situation where one party decides how much risk to take while the burden of that risk largely falls on another party. For example, an employee deciding what level of caution to take when handling customer data. Every day around the world, junior staff make decisions about how to move, store or process sensitive data. The problem of supervising these junior staff members, and the gap between what managers want versus what staff actually do, is sometimes called the principal-agent problem. Should people use encryption? Should they email data home to work on over the weekend or not? We can tell them what to do, but the million-dollar question for us is how do we get them to care?
We try to train staff so that they understand the impact of their actions and are competent to perform their jobs, and we attempt to motivate them with combinations of carrot and stick. We largely rely on the stick, and the most common approach is the threat of disciplinary action and the risk of being fired. In practice, however, there are a number of barriers to this being an effective sanction. Firstly, not all organisations have the will to fire an employee over a security incident. I'm not saying this is right or wrong, just pointing it out. Secondly, not all security events are discovered. Groups like Anonymous seek to publicise their security intrusions; other groups avoid any kind of publicity that might alert the target that their systems or data have been compromised. Then, even when a security incident is discovered, it's not always possible to attribute it to an individual. Lastly, delays in discovering a breach can mean that the person who caused or allowed it is no longer with the company. Some breaches exist for months or years before being discovered, and with workforces becoming more mobile the chances of holding someone responsible are not as great as they once were.
The other dimension to moral hazard lies outside the organisation rather than within it. Another concept from economics is the 'externality', which arises when organisations transfer the costs of doing business onto third parties. Pollution is a good example: the burden of a company doing business falls on neighbouring residents, who suffer decreased property values and health problems. To manage externalities, society uses regulation. Regulations push the costs back from society onto the entity that caused them, for example by setting maximum pollution limits and establishing processes by which affected parties can claim compensation. The security field has its own answer to externality in the form of PCI DSS, HIPAA and the EU Data Protection Directive. In effect, these are mechanisms which push the costs of security breaches back to the source. PCI DSS, for example, includes mechanisms to charge organisations for the costs involved in replacing customer credit cards.
There are strong arguments that, as low as we might think security investment is, it would be even lower without government regulation. Taking these arguments to their logical conclusion, perhaps we need government-mandated standards for security awareness and training. I'm not just talking about a cosmetic makeover of NIST 800-50, but a comprehensive standard covering competencies and culture. Something that would help mitigate the moral hazard effect and get employees to care about the risks they take.
Other news: SANS are holding security awareness conferences in London (July 10th) and in Philadelphia (August 19th). The SANS Securing The Human (STH) community is a fantastic resource for anyone interested in security awareness and this is the first time that the community has been brought together for a live event. There will be some fantastic speakers and it’s well worth attending if you can make it.