Reward and Punishment

It was a children’s birthday party. He cried and whinged and pleaded with tears streaming down his face. For about two minutes his mother said no, but eventually she pulled a chocolate biscuit from her bag and gave it to him before turning to me and saying, “I just don’t know why he cries so much”. ‘Operant conditioning’ is a phrase coined by B.F. Skinner that many security awareness professionals may not have heard before. Broadly, it means that behaviour is a function of its consequences: if the consequence of a behaviour is positive, the likelihood or magnitude of that behaviour tends to increase. Negative consequences have the opposite effect.

My young friend at the birthday party had been trained to whinge and cry because he had been regularly rewarded with a treat for doing so. Just as rewards and punishments influence children’s behaviour, they are also an important factor in governance, risk and compliance. However, there are interesting quirks of rewards and punishments that need to be understood by anyone trying to influence behaviour.

ISSA Security Awareness Column March 2013 – Lowering Security Awareness

We spend a lot of time talking about how to raise security awareness. We fill entire books, columns and conferences with it. However, anything that can go up must also go down. How about we turn the phrase on its head and ask what lowers security awareness? Just as there are behaviours that raise security awareness, there are also behaviours that lower it. But what can we do about it? Name and shame was an important step in getting software vendors to deal with security vulnerabilities in their products. We should be equally critical when human vulnerabilities are created through the promotion of unsafe attitudes and behaviours. In this column I’m going to name and shame some particularly egregious examples that I think reduce security awareness.

ISSA Security Awareness Column Feb 2013 – Innovation in Information Security Awareness

Here’s a trivia question for you – how did President George Washington die? No points for anyone who thought he died in battle, fell from a horse or was poisoned. Actually, he had an infection and suffered massive blood loss. Why he suffered massive blood loss is fascinating. For thousands of years people were convinced that blood could become stale and that ‘bad humours’ could cause illness, for which bloodletting was the solution. When Washington became sick, his staff did the natural thing at the time and bled him. When he didn’t improve, his staff bled him some more. Then the doctor was called, and when he arrived Washington was bled again. All told, Washington lost some 6 pints of blood in a 16-hour period. He had a severe infection to be sure, but it’s likely that the massive blood loss significantly contributed to his demise.

Sometimes, how we define a problem limits our ability to solve it. Innovation counts for nothing if the approach itself is the problem. Physicians focused on how to let blood more effectively for thousands of years. Elaborate rituals developed to define where on the body blood could be taken from to fix specific ailments. Contraptions such as scarificators were invented to help people administer their own bloodletting – you didn’t have to visit someone to get them to do it for you (ever wondered what the red on a barber’s pole stood for?).

ISSA Security Awareness Column Jan 2013 – Bad Apples in Big Barrels

There’s no denying that some people are impervious to our attempts at security awareness and refuse to listen to warnings or instructions. There is a temptation when things go wrong to label such people as ‘bad apples’. I think that this saying is overused. Originally, the expression ‘bad apple’ referred to a rotten apple in a barrel that would spoil the good apples. Usage of the phrase has changed and it’s now often used to explain failures of scale. The perception is that when there are many apples you have to expect some of them to be bad.

I often hear the phrase used when a governance failure is attributed to human mistakes. Frequently, however, I think ‘bad apple’ is a convenient cover for poor management, where processes and procedures were badly designed or supervised. The bad apple narrative suits the prejudice that humans are the weak link, and any narrative is more comforting than no narrative at all. However, bad apple narratives rarely withstand serious scrutiny.

ISSA Security Awareness Column December 2012 – Security Awareness Training Feedback Surveys

Whoever said that there’s no such thing as a stupid question, only a stupid answer, has probably never seen a feedback survey for security awareness training sessions. Questions such as “Did you learn anything?” and “Do you feel more secure?” are as common as they are idiotic. I guess it’s largely shaped by the motives of whoever is asking. The trainers involved are primarily interested in demonstrating that they are good trainers, so the questions are designed to elicit complimentary feedback. Feedback surveys are a great chance to obtain valuable feedback, but only if we’re asking the right questions.

In this column we’re going to look at training feedback surveys in more detail. Getting useful feedback from training sessions is challenging, but not impossible. For a start, you need to be aware of people’s biases. Surveys measure ‘declared preferences’ since they rely on people expressing their views. While easier to gather, declared preferences have inherent biases that need to be acknowledged and allowed for when interpreting the results. ‘Revealed preferences’ are what people actually do but measuring what people do accurately and efficiently can be difficult especially if people know they’re being observed. Here are some suggestions for allowing for people’s biases while obtaining reliable survey data.

ISSA Security Awareness Column November 2012 – Why Do People Ignore Risk Advice?

How frustrating is it when you point out the risks to people and they just don’t listen? Every day around the world there are millions of people who smoke, drive too fast and click on strange emails, even though they’ve been repeatedly told about the dangers. They are ‘risk aware’ in the technical sense of the word and yet their behaviour continues. This is a big problem since the mainstream approach to security awareness assumes that all that’s needed to achieve behavioural change is an understanding of the risks. Traditionally when encountering non-compliant behaviour, we security technocrats reiterate the facts and increase the threat of sanctions. But, there is another way.

Luckily for us, this is a problem that safety risk communicators have been grappling with for decades. The safety risk communications field has a number of explanatory frameworks to predict how people will react to risk communications. One of the most interesting models to arise is the Extended Parallel Process Model (EPPM), which seeks to explain why people fail to take action once aware of a threat. This is a goldmine for security professionals looking to apply a more structured, formal approach for promoting behavioural change.

RSA Europe 2012 Security Awareness Debate

I’m really looking forward to RSA Europe 2012 next week where I’ll be taking part in a debate about whether or not organisations should train their staff in security awareness. It is being organised by Acumin and the RANT community. Participating with me will be: Christian Toon, European Head of Information Risk, Iron Mountain Europe Thom Langford, Director Global Security…

ISSA Security Awareness Column October 2012 – Learning From Safety Risk Communications

Any endeavour is made doubly difficult when pursued with a lack of metrics and without a clear understanding of cause and effect. When stumbling in the dark, facts are the flashlight of comprehension, illuminating the way forward when the path is unclear. Information security is often required to function in the dark with little in the way of facts to guide us. We hear noises and bump into things but can never be certain if we’re going in the right direction.

When security fails, how do we know? While failures of integrity and availability are obvious, failures of confidentiality can be silent and insidious. Some actors such as LulzSec boast about their exploits and derive their benefits from the resulting publicity. Other actors quietly go about their ‘business’, and organisations may not realise they’ve been breached. Often, even when we do discover failures of confidentiality, the organisational interest is to bury them. As a result, our profession is rich in rumours but poor in facts, which makes it difficult to understand the effectiveness of security controls.

ISSA Security Awareness Column September 2012 – Cargo Cult Security

During the course of World War Two in the Pacific, numerous primitive cultures on remote islands came into contact with Westerners for the first time. Islanders were particularly impressed with the cargo that the visitors brought with them. At the conclusion of World War Two most of the visitors left and the cargo stopped arriving. Across multiple islands separated by thousands of miles, a strange phenomenon occurred. Islanders attempted to invite new cargo by imitating the conditions that had prevailed when the cargo was arriving. They cleared spaces for aircraft landing strips, and “controllers” dressed up with vines for wires and sticks for microphones. Bizarre ritualised behaviour developed around the use of artefacts like uniforms and insignias. ‘Cargo cult’ is a phrase the physicist Richard Feynman popularised to describe activity in which appearances are superficially imitated: a result is pursued without actually understanding the underlying mechanisms of cause and effect, and prerequisites are mistaken for causes. The pattern across so many independent island cultures suggests that this confusion is part of human nature. A good causation parody you may have heard of is that a lack of pirates causes global warming.

ISSA Security Awareness Column August 2012 – Security Satisficing

What if much of our security advice to users was a waste of their time? What if some of it actually made users worse off? These are bold words, but stay with me and let’s see where this goes. There will be some maths on the journey, but I promise it will be worth it.

Let’s look at passwords as an example. Many thousands of pages of security policy have been generated on creating strong passwords. It’s one of the most common subjects for security awareness: more letters, more numbers, make it longer and put a special character in it. Actually, most passwords don’t need to be strong, they just need to be difficult to guess, which isn’t the same thing. Cormac Herley points out that password strength no longer has the critical role in security that it used to. It’s largely irrelevant because most systems now control the rate of password guessing attempts, for example by allowing only five attempts every 30 minutes. In this scenario, the difference between 7-character and 8-character passwords is negligible, because the system limits a brute-force attack to 240 attempts per day.

Modern authentication systems are much more likely to be compromised by password database disclosures, password re-use and key-loggers, and complexity does not assist with managing any of these threats. For years we’ve been focused on complexity, and as a result users come up with combinations like “Password1” which meet our complexity rules but don’t effectively mitigate their risks. We need to change. We need to stop talking about password complexity and start talking about password commonality. Potentially, we’re doing more harm than good by occupying valuable (and limited) attention spans with topics of marginal return. The risks have changed and our risk communication needs to reflect that.