ISSA Security Awareness Column November 2012 – Why Do People Ignore Risk Advice?

How frustrating is it when you point out the risks to people and they just don’t listen? Every day, millions of people around the world smoke, drive too fast and click on strange emails, even though they’ve been repeatedly told about the dangers. They are ‘risk aware’ in the technical sense of the term, and yet their behaviour continues. This is a big problem, since the mainstream approach to security awareness assumes that an understanding of the risks is all that’s needed to achieve behavioural change. Traditionally, when encountering non-compliant behaviour, we security technocrats reiterate the facts and increase the threat of sanctions. But there is another way.

Luckily for us, this is a problem that safety risk communicators have been grappling with for decades. The safety risk communications field has a number of explanatory frameworks for predicting how people will react to risk communications. One of the most interesting is the Extended Parallel Process Model (EPPM), which seeks to explain why people fail to take action once they are aware of a threat. This is a goldmine for security professionals looking for a more structured, formal approach to promoting behavioural change.
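For readers who like a model made concrete, here is a minimal sketch of the EPPM’s core decision logic. The model itself is qualitative, so the 0–1 scales and the threshold below are purely my own illustrative assumptions:

```python
def eppm_response(perceived_threat: float, perceived_efficacy: float) -> str:
    """Toy illustration of the Extended Parallel Process Model.

    perceived_threat   -- how severe and personally relevant the risk feels
    perceived_efficacy -- belief that the advice works AND "I can actually do it"
    Both inputs are on an arbitrary 0-1 scale (an illustrative assumption).
    """
    LOW_THREAT = 0.5  # illustrative threshold, not part of the model
    if perceived_threat < LOW_THREAT:
        # The risk doesn't feel relevant, so the message is simply ignored.
        return "no response"
    if perceived_efficacy >= perceived_threat:
        # Frightened but confident: people act on the danger itself.
        return "danger control: adopts the recommended behaviour"
    # Frightened but feeling helpless: people manage the fear instead of
    # the danger -- denial, avoidance, reactance.
    return "fear control: denies or avoids the message"

# A scary phishing warning that offers no practical guidance:
print(eppm_response(perceived_threat=0.9, perceived_efficacy=0.2))
# -> fear control: denies or avoids the message
```

The takeaway for awareness programmes: ramping up the scariness of a message without also raising the audience’s sense that they can actually do something about it tends to push people into fear control, not compliance.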

RSA Europe 2012 Security Awareness Debate

I’m really looking forward to RSA Europe 2012 next week, where I’ll be taking part in a debate about whether or not organisations should train their staff in security awareness. It is being organised by Acumin and the RANT community. Participating with me will be Christian Toon, European Head of Information Risk, Iron Mountain Europe; and Thom Langford, Director Global Security…

ISSA Security Awareness Column October 2012 – Learning From Safety Risk Communications

Any endeavour is made doubly difficult when pursued without metrics and without a clear understanding of cause and effect. When stumbling in the dark, facts are the flashlight of comprehension, illuminating the way forward when the path is unclear. Information security is often required to function in the dark, with little in the way of facts to guide us. We hear noises and bump into things but can never be certain we’re going in the right direction.

When security fails, how do we know? While failures of integrity and availability are obvious, failures of confidentiality can be silent and insidious. Some actors, such as LulzSec, boast about their exploits and derive their benefit from the resulting publicity. Others quietly go about their ‘business’, and organisations may not realise they’ve been breached. Often, even when we do discover failures of confidentiality, the organisational interest is to bury them. As a result, our profession is rich in rumours but poor in facts, which makes it difficult to gauge the effectiveness of security controls.

ISSA Security Awareness Column September 2012 – Cargo Cult Security

During World War Two, numerous isolated cultures on remote Pacific islands came into contact with Westerners for the first time. Islanders were particularly impressed by the cargo the visitors brought with them. At the end of the war most of the visitors left, and the cargo stopped arriving. Then, across multiple islands separated by thousands of miles, a strange phenomenon occurred: the islanders attempted to summon new cargo by imitating the conditions under which it had arrived. They cleared spaces for aircraft landing strips, and “controllers” dressed up with vines for wires and sticks for microphones. Bizarre ritualised behaviour developed around artefacts like uniforms and insignia.

The physicist Richard Feynman later drew on this phenomenon when he coined the phrase “cargo cult science” for activity in which appearances are superficially imitated: a result is pursued without understanding the underlying mechanisms of cause and effect, and prerequisites are mistaken for causes. The recurrence of this pattern across so many independent island cultures suggests that the confusion is part of human nature. A well-known parody of the same confusion is the claim that the decline in pirate numbers causes global warming.

ISSA Security Awareness Column August 2012 – Security Satisficing

What if much of our security advice to users was a waste of their time? What if some of it actually made users worse off? These are bold words, but stay with me and let’s see where this goes. There will be some maths on the journey, but it will be worth it, I promise.

Let’s look at passwords as an example. Many thousands of pages of security policy have been devoted to creating strong passwords, and it’s one of the most common subjects for security awareness: more letters, more numbers, make it longer, put a special character in it. Actually, most passwords don’t need to be strong; they just need to be difficult to guess, which isn’t the same thing. Cormac Herley points out that password strength no longer plays the critical role in security that it used to. It’s largely irrelevant, since most systems now control the rate of password guessing attempts, for example by allowing only five attempts every 30 minutes. In this scenario the difference between seven-character and eight-character passwords is negligible, because the system limits a brute-force attack to 240 attempts per day.

Modern authentication systems are much more likely to be compromised by password database disclosures, password re-use and keyloggers, and complexity does not help with any of these threats. For years we’ve been focused on complexity, and as a result users come up with combinations like “Password1” that meet our complexity rules but don’t effectively mitigate their risks. We need to change. We need to stop talking about password complexity and start talking about password commonality. Potentially, we’re doing more harm than good by occupying valuable (and limited) attention spans with topics of marginal return. The risks have changed, and our risk communication needs to reflect that.
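To put numbers on the rate-limiting claim above, here is a back-of-the-envelope sketch. The 240-guesses-per-day figure follows the throttle described in the column; the lowercase-only alphabet is my own illustrative assumption:

```python
# How long would an online brute-force attack take against a login
# throttled to five attempts per 30 minutes (240 guesses per day)?

GUESSES_PER_DAY = 5 * 48  # five attempts per 30-minute window, 48 windows/day

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    """Years needed to try every password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_DAY / 365

# Lowercase-only passwords (illustrative assumption), 7 vs 8 characters:
for length in (7, 8):
    print(f"{length} characters: ~{years_to_exhaust(26, length):,.0f} years")

# Prints roughly:
#   7 characters: ~91,687 years
#   8 characters: ~2,383,871 years
# Both figures are absurdly large, so the extra character buys nothing
# in practice. What an attacker CAN cover within the throttle is a short
# list of common passwords -- which is why commonality, not complexity,
# is the risk that matters here.
```

The design point is that once guessing is throttled, the keyspace only needs to be large relative to the attacker’s daily budget; a password that appears in a common-passwords list fails regardless of how many character classes it contains.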

ISSA Security Awareness Column July 2012 – Security Induction Sessions

One of the small mercies of being a security consultant is that I’m usually spared the ordeal of attending information security induction sessions. Recently, however, I was asked to review the induction process for a European organisation. It was classic death by PowerPoint: organisational charts of the security function, strategic plans for ISO certification, and pages and pages of security policy requirements. The session concluded with a quiz on facts from the security policy.

Why do we do this? Why do we make people’s first contact with information security an ordeal for insomniacs? Consider that in their first week at a new job, people are usually nervous and on edge, with elevated levels of adrenaline and cortisol (a stress hormone), which is not conducive to learning. In some ways we’ve picked the worst possible week to deliver training.

What are we trying to achieve with induction sessions? Is there a benefit to users being able to describe the organisational structure of the security department? Surely they only need to know how to contact it in the event of an incident. What benefit is there in users knowing the ISO certification strategy? These might be things we want to tell them, but do they care? As technical experts, we seem to make the mistake of selecting the information we want to tell people, not the information they need to know or are disposed to listen to.

ISSA Security Awareness Column June 2012 – Security Awareness in Crisis

You wouldn’t know it by looking at it, but the information security awareness industry is in crisis. Humans are increasingly seen as the weak link in information security defences, and human factors are gaining prominence as a preferred exploit. Time after time we see expensive technical solutions bypassed by a simple call to the helpdesk, or by someone just asking users for their password. A cynic might say that mistakes are inevitable when humans are involved. But have we really made our best attempt at managing human information security risks? In this series of columns about awareness and risk communications, we’ll take a fresh look at the ways we attempt to manage human risks.

Technical information security solutions have advanced in leaps and bounds over the last two decades. We now have real-time anti-virus, local firewalls and automated patching. It’s a far cry from the old days, when we had to remember to load anti-virus manually after starting our computers. By comparison, human security management remains largely unchanged: we create information security policies and publish them on intranets; we hold mandatory training sessions. If the problem is getting worse, then what is the solution? More policies? More mandatory training? Or is there a fundamental problem in how security professionals are approaching it?

Remind me again, what is the problem we’re trying to solve? Our implicit assumption seems to be that insecure behaviour is caused by a “lack of facts” on the audience’s part, so we distribute information in the hope that behaviour improves. But what if people have heard our message before and it didn’t fix anything? Telling people what they have likely already heard can only yield a marginal return at best.

Definition of Security Awareness

I’ve studied it for years, I’ve delivered it and I’ve even sat through it, but I’m still not really sure what “it” is.

We talk about raising “security awareness”, but what does that actually mean? The dictionary definitions I’ve seen commonly refer to awareness as a state of knowledge about risk. Thousands of articles and books have been written on increasing security awareness, but very little time has been spent trying to define it.

The ISF Standard of Good Practice defines security awareness as “the extent to which staff understand the importance of information security, the level of security required by the organisation and their individual security responsibilities.” This seems like a reasonable definition, but note that it has no behavioural component. People can (and do!) continue with unsafe behaviour despite knowing the risks. Empirical evidence from outside information security tells us that merely knowing about a risk isn’t enough: consider smokers, or people who drive without a seat belt. They are surely all “aware” of the risks, yet their behaviour continues.