When it comes to security awareness, there’s no such thing as a blank canvas. Your audience will already have preconceived notions about your topic. The language, tone and media you use will evoke associations in people’s minds, both helpful and unhelpful. These associations will influence how people view the root causes, likelihood and potential outcomes.
I’m back from the ISSA conference in Baltimore. Conferences are a great place to test out ideas to find out which ones stand up to scrutiny. I was giving my “Death by a Thousand Facts” presentation (otherwise known as the We’ve Got It All Wrong Roadshow) when Marcus Ranum pointed out a problem with my application of the term “learned helplessness”.
Learned helplessness is a concept used to describe the effect when animals essentially “give up” and resign themselves to negative consequences. In a famous series of experiments, Martin Seligman put dogs in pens with a low wall and ran an electric current through the floor to produce an unpleasant sensation. The dogs which had not encountered the shocks before jumped over the wall to escape the sensation. Surprisingly, the dogs which had previously been exposed to shocks they hadn’t been able to escape essentially “gave up” and lay down in the pen.
The landing gear light indicated a problem. The captain, first officer and flight engineer of Eastern Air Lines Flight 401 tried to figure out what was wrong. They removed the light assembly and the flight engineer left his position to go to the avionics bay and investigate. They were so preoccupied with a burnt out…
I’m presenting for the ISSA web conference on security awareness on 22nd May.
Registrations are available here.
It’s an exciting line-up and I’m delighted to see other speakers referencing mental models. One day we’ll look back on the old “data dump” approach to security awareness and wonder what got into us.
During the course of World War Two in the Pacific, numerous primitive cultures on remote islands came into contact with Westerners for the first time. Islanders were particularly impressed with the cargo the visitors brought with them. At the conclusion of the war most of the visitors left and the cargo stopped arriving. Across multiple islands separated by thousands of miles a strange phenomenon occurred: islanders attempted to summon new cargo by imitating the conditions that had existed while the cargo was arriving. They cleared spaces for aircraft landing strips, and “controllers” dressed up with vines for wires and sticks for microphones. Bizarre ritualised behaviour developed around artefacts like uniforms and insignia. The physicist Richard Feynman later drew on this phenomenon when he coined the phrase “cargo cult science” to describe activity in which appearances are superficially imitated: a result is pursued without understanding the underlying mechanisms of cause and effect, and prerequisites are mistaken for causes. That the same pattern arose across so many independent island cultures suggests this confusion is part of human nature. A well-known parody of the same confusion is the claim that the decline in piracy causes global warming.
Any endeavour is made doubly difficult when pursued with a lack of metrics and without a clear understanding of cause and effect. When stumbling in the dark, facts are the flashlight of comprehension, illuminating the way forward when the path is unclear. Information security is often required to function in the dark with little in the way of facts to guide us. We hear noises and bump into things but can never be certain if we’re going in the right direction.
When security fails, how do we know? While failures of integrity and availability are obvious, failures of confidentiality can be silent and insidious. Some actors such as LulzSec boast about their exploits and derive their benefits from the resulting publicity. Other actors quietly go about their ‘business’ and organisations may not realise they’ve been breached. Often, even when we do discover failures of confidentiality, the organisational interest is to bury them. As a result, our profession is rich in rumours but poor in facts, which makes it difficult to understand the effectiveness of security controls.
How frustrating is it when you point out the risks to people and they just don’t listen? Every day around the world there are millions of people who smoke, drive too fast and click on strange emails, even though they’ve been repeatedly told about the dangers. They are ‘risk aware’ in the technical sense of the word and yet their behaviour continues. This is a big problem since the mainstream approach to security awareness assumes that all that’s needed to achieve behavioural change is an understanding of the risks. Traditionally when encountering non-compliant behaviour, we security technocrats reiterate the facts and increase the threat of sanctions. But, there is another way.
Luckily for us, this is a problem that safety risk communicators have been grappling with for decades. The safety risk communications field has a number of explanatory frameworks to predict how people will react to risk communications. One of the most interesting to arise is the Extended Parallel Process Model (EPPM), which seeks to explain why people fail to take action once aware of a threat. This is a goldmine for security professionals looking for a more structured, formal approach to promoting behavioural change.
We spend a lot of time talking about how to raise security awareness. We fill entire books, columns and conferences with it. However, anything that can go up must also go down. How about we turn the phrase on its head and ask what lowers security awareness? Just as there are behaviours that raise security awareness, there are also some that lower it. But what can we do about it? Name and shame was an important step in getting software vendors to deal with security vulnerabilities in their products. We should be equally critical when human vulnerabilities are created through the promotion of unsafe attitudes and behaviours. In this column I’m going to name and shame particularly egregious examples which I think reduce security awareness.
You wouldn’t know it by looking at it, but the information security awareness industry is in crisis. Humans are increasingly seen as the weak link in information security defences, and attackers increasingly prefer to exploit human factors. Time after time we see expensive technical solutions bypassed by a simple call to the helpdesk, or by someone just asking users for their password. A cynic might say that’s because mistakes are inevitable when humans are involved. However, have we made our best attempt at managing human information security risks? In a series of columns about awareness and risk communications we’ll be taking a fresh look at the ways we attempt to manage human risks.
Technical information security solutions have advanced in leaps and bounds over the last two decades. We now have real-time anti-virus, local firewalls and automated patching. It’s a far cry from the old days when we had to remember to load anti-virus manually after starting our computer. By comparison, human security management remains largely unchanged. We create information security policies and publish them on intranets. We hold mandatory training sessions. If the problem is getting worse, then what is the solution? More policies? More mandatory training? Or is there a fundamental problem in how security professionals are approaching the problem? Remind me again: what is the problem we’re trying to solve? Our implicit assumption seems to be that the cause of insecure behaviour is a “lack of facts” on the part of the audience. Hence we distribute information in the hope that behaviour improves. But what if people have heard our message before and it didn’t fix anything? Telling people what they have likely heard before can only have a marginal return at best.
One of the small mercies of being a security consultant is that I’m usually spared the ordeal of attending information security induction sessions. Recently however I was asked to review the induction process for a European organisation. It was classic death by PowerPoint. It included organisational charts of the security function, strategic plans for ISO certification and pages and pages of security policy requirements. The conclusion of the session was a quiz on facts from the security policy.
Why do we do this? Why do we make people’s first contact with information security an ordeal for insomniacs? Consider that in their first week at a new job people are usually nervous and on edge. Accompanying this will be elevated levels of adrenaline and cortisol (a stress hormone), which are not conducive to learning. In some ways we’ve picked the worst week to deliver training.
What is it that we’re trying to achieve with induction sessions? Is there a benefit to users being able to describe the organisational structure of the security department? Surely they only need to know how to contact the security department in the event of an incident? What benefit is there for users in knowing the ISO certification strategy? These might be things we want to tell them, but do they care? As technical experts we seem to make the mistake of selecting the information we want to tell people, not the information people need to know or are disposed to listen to.