ISSA Security Awareness Column July 2012 – Security Induction Sessions

One of the small mercies of being a security consultant is that I’m usually spared the ordeal of attending information security induction sessions. Recently, however, I was asked to review the induction process for a European organisation. It was classic death by PowerPoint: organisational charts of the security function, strategic plans for ISO certification and page after page of security policy requirements. The session concluded with a quiz on facts from the security policy.

Why do we do this? Why do we make people’s first contact with information security an ordeal for insomniacs? Consider that in their first week at a new job people are usually nervous and on edge, with elevated levels of adrenaline and cortisol (a stress hormone), neither of which is conducive to learning. In some ways we’ve picked the worst possible week to deliver training.

What are we trying to achieve with induction sessions? Is there a benefit to users being able to describe the organisational structure of the security department? Surely they only need to know how to contact the security department in the event of an incident. What benefit is there in users knowing the ISO certification strategy? These might be things we want to tell them, but do they care? As technical experts we seem to make the mistake of selecting the information we want to tell people, not the information people need to know or are disposed to listen to.

ISSA Security Awareness Column June 2012 – Security Awareness in Crisis

You wouldn’t know it by looking at it, but the information security awareness industry is in crisis. Humans are increasingly seen as the weak link in information security defences, and human factors are increasingly the exploit of choice. Time after time we see expensive technical solutions bypassed by a simple call to the helpdesk or by someone just asking users for their password. A cynic might say that’s because mistakes are inevitable when humans are involved. But have we really made our best attempt at managing human information security risks? In this series of columns about awareness and risk communications we’ll be taking a fresh look at the ways we attempt to manage human risks.

Technical information security solutions have advanced in leaps and bounds over the last two decades. We now have real-time anti-virus, local firewalls and automated patching. It’s a far cry from the old days, when we had to remember to load anti-virus manually after starting the computer. By comparison, human security management remains largely unchanged. We create information security policies and publish them on intranets. We hold mandatory training sessions.

If the problem is getting worse, then what is the solution? More policies? More mandatory training? Or is there a fundamental problem in how security professionals are approaching it? Remind me again, what is the problem we’re trying to solve? Our implicit assumption seems to be that the cause of insecure behaviour is a “lack of facts” on the part of the audience, so we distribute information in the hope that behaviour improves. But what if people have heard our message before and it didn’t fix anything? Telling people again what they have likely heard already can only have a marginal return at best.

Definition of Security Awareness

I’ve studied it for years, I’ve delivered it and I’ve even sat through it, but I’m still not really sure what “it” is.

We talk about raising “security awareness” but what does that actually mean? The dictionary definitions I’ve seen commonly refer to awareness as a state of knowledge about risk. Thousands of articles and books have been written on increasing security awareness but very little time has been spent trying to define it.

The ISF Standard of Good Practice defines security awareness as “the extent to which staff understand the importance of information security, the level of security required by the organisation and their individual security responsibilities.” This seems a reasonable definition, but note that it has no behavioural component. People can (and do!) continue with unsafe behaviour despite their knowledge of the risks. Empirical evidence from outside information security tells us that simply knowing about a risk isn’t enough. Consider smokers, or people who drive without a seat belt: they’re surely all “aware” of the risks, yet their behaviour continues.

Learned Helplessness

I’m back from the ISSA conference in Baltimore. Conferences are a great place to test ideas and find out which ones stand up to scrutiny. I was giving my “Death by a Thousand Facts” presentation (otherwise known as the We’ve Got It All Wrong Roadshow) when Marcus Ranum pointed out a problem with my application of the term “learned helplessness”.

Learned helplessness describes the effect in which animals essentially “give up” and resign themselves to negative consequences. In a famous series of experiments, Martin Seligman put dogs in pens with a low wall and ran an electric current through the floor to produce an unpleasant sensation. Dogs that had not encountered the shocks before jumped over the wall to escape. Surprisingly, dogs that had previously been exposed to shocks they could not escape simply “gave up” and lay down in the pen.

Organisational Culture and Compliance

Many of you will be familiar with the footage of Ian Tomlinson apparently being struck by a Metropolitan Police Officer in London on the day of the G20 protests. After the footage was aired, senior members of the Met Police were quick to promote the narrative of a “bad apple”. They pointed out that the Met Police is an organisation which includes some 50,000 people.

You have to have some sympathy for the police; they do a difficult job. The problem with the bad apple narrative is the video footage of the incident itself. Although the attack on Ian Tomlinson took place immediately in front of at least three other members of the Met Police, none of them appears concerned enough to go to Tomlinson’s aid, nor are they seen to remonstrate with their colleague.

Death by a Thousand Facts: Criticising the Technocratic Approach to Information Security Awareness

Recently I co-authored a paper, “Death by a Thousand Facts”, with David Lacey for the HAISA conference, in which we explored how technical experts choose the content included in risk communications. A copy of the proceedings is available here. Basically, mainstream information security awareness techniques are failing to evolve at the same…

Bounded Rationality

Are humans rational? When we see computer users do silly things that place themselves or their information at risk, it’s easy to conclude that people are illogical. The problem is that logic can’t be examined separately from perception.

There is significant debate within the psychology literature as to the extent to which humans can be described as rational. Rationality is sometimes described as the ability of individuals to select the “best” option when confronted with a set of choices. The best option is also referred to as the “value maximising” option: the one that obtains the most benefit for the least expenditure of resources or exposure to risk.
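To make “value maximising” concrete, here is a minimal sketch in standard expected-utility notation; the symbols are illustrative assumptions, not something from the original column:

$$a^{*} = \arg\max_{a \in A} \sum_{s \in S} p(s)\,U(a, s)$$

Here A is the set of available actions, S the set of possible outcomes, p(s) the probability of outcome s, and U(a, s) the benefit of action a given outcome s, net of its cost and risk. A perfectly rational chooser picks a*; the bounded rationality literature observes that real people routinely don’t.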

The problem is that people routinely fail to select the “value maximising” option and exhibit apparently illogical behaviour. Commonly, the option mathematically modelled as the best choice by technical experts isn’t the one information system users actually choose when responding to risk.