Martin Luther King said ‘I have a dream’, not ‘I have a plan’
– Simon Sinek
Engaging end users through marketing, psychology and safety theory.
About Geordie Stewart
His award-winning master's thesis at the Royal Holloway Information Security Group examined information security awareness from a fresh perspective: as a marketing and communications challenge. In his regular speaking appearances at international information security conferences such as RSA, ISACA and ISSA, he challenges conventional thinking on risk culture and communication.
In addition to holding senior security management roles in large UK organisations, Geordie writes the security awareness column for the ISSA international journal.
During the course of World War Two in the Pacific, numerous isolated cultures on remote islands came into contact with Westerners for the first time. Islanders were particularly impressed with the cargo that the visitors brought with them. At the conclusion of the war most of the visitors left and the cargo stopped arriving. Across multiple islands separated by thousands of miles, a strange phenomenon occurred: islanders attempted to invite new cargo by imitating the conditions that had prevailed while the cargo was arriving. They cleared spaces for aircraft landing strips, and “controllers” dressed up with vines for wires and sticks for microphones. Ritualised behaviour developed around the use of artefacts like uniforms and insignias. The physicist Richard Feynman popularised the phrase “cargo cult” to describe activity where appearances are superficially imitated: a result is pursued without actually understanding the underlying mechanisms of cause and effect, and prerequisites are mistaken for causes. The pattern across so many independent island cultures suggests that this confusion is part of human nature. A good causation parody you may have heard of is that a lack of pirates causes global warming.
What if much of our security advice to users was a waste of their time? What if some of it actually made users worse off? These are bold words, but stay with me and let’s see where this goes. There will be some maths on the journey, but it will be worth it, I promise. Let’s look at passwords as an example. Many thousands of pages of security policy have been generated on creating strong passwords, and it’s one of the most common subjects for security awareness: more letters, more numbers, make it longer and put a special character in it. Actually, most passwords don’t need to be strong; they just need to be difficult to guess, which isn’t the same thing. Cormac Herley points out that password strength no longer has the critical role in security that it used to. It’s largely irrelevant since most systems now control the rate of password-guessing attempts, for example by allowing only five attempts every 30 minutes. In this scenario, the difference between 7-character and 8-character passwords is negligible if the system limits a brute-force attack to 240 attempts per day. Modern authentication systems are much more likely to be compromised by password database disclosures, password re-use and key-loggers, and complexity does not help manage any of these threats. For years we’ve been focused on complexity, and as a result users come up with combinations like “Password1” which meet our complexity rules but don’t effectively mitigate their risks. We need to change. We need to stop talking about password complexity and start talking about password commonality. Potentially, we’re doing more harm than good by occupying valuable (and limited) attention spans with topics of marginal return. The risks have changed, and our risk communication needs to reflect that.
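The rate-limiting arithmetic above is easy to check for yourself. Here is a back-of-the-envelope sketch; the 26-letter lowercase alphabet and the five-attempts-per-30-minutes limit (240 guesses per day) are illustrative assumptions taken from the example, not a model of any real system:

```python
# How long would an online brute-force attack take against a
# rate-limited login, as described above?
# Assumptions: lowercase-only passwords, 5 guesses per 30 minutes,
# and the attacker must try the entire keyspace.

GUESSES_PER_DAY = 5 * 48  # five attempts every 30 minutes -> 240/day

def days_to_exhaust(length: int, alphabet_size: int = 26) -> float:
    """Days needed to try every password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_DAY

for length in (7, 8):
    years = days_to_exhaust(length) / 365
    print(f"{length}-char lowercase keyspace: ~{years:,.0f} years at 240 guesses/day")
```

Both figures come out in the tens of thousands of years or more, which is the point: once the guessing rate is capped, the extra character adds nothing an attacker will ever notice, while the realistic threats (database disclosure, re-use, key-loggers) are untouched by either length.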
I’ve contributed a posting on password strength as an engineering problem rather than an awareness problem on the SANS Securing The Human blog. There’s a great quote from “Evil Dave” that sums up the problem rather well: “Through 20 years of effort, we’ve successfully trained everyone to use passwords that are hard for humans to remember, but…
One of the small mercies of being a security consultant is that I’m usually spared the ordeal of attending information security induction sessions. Recently, however, I was asked to review the induction process for a European organisation. It was classic death by PowerPoint: organisational charts of the security function, strategic plans for ISO certification, and pages and pages of security policy requirements. The session concluded with a quiz on facts from the security policy.
Why do we do this? Why do we make people’s first contact with information security an ordeal for insomniacs? Consider that in their first week at a new job, people are usually nervous and on edge. Accompanying this are elevated levels of adrenaline and cortisol (a stress hormone), which are not conducive to learning. In some ways, we’ve picked the worst week to deliver training.
What is it that we’re trying to achieve with induction sessions? Is there a benefit to users being able to describe the organisational structure of the security department? Surely they only need to know how to contact the security department in the event of an incident. What benefit is there in users knowing the ISO certification strategy? These might be things we want to tell them, but do they care? As technical experts, we seem to make the mistake of selecting the information we want to tell people, not the information people need to know or are disposed to listen to.
You wouldn’t know it by looking at it, but the information security awareness industry is in crisis. Humans are increasingly seen as the weak link in information security defences, and human factors are growing in prominence as a preferred exploit. Time after time we see expensive technical solutions bypassed by a simple call to the helpdesk, or by someone just asking users for their password. A cynic might say that’s because mistakes are inevitable when humans are involved. But have we really made our best attempt at managing human information security risks? In this series of columns about awareness and risk communication, we’ll be taking a fresh look at the ways we attempt to manage human risks.
Technical information security solutions have advanced in leaps and bounds over the last two decades. We now have real-time anti-virus, local firewalls and automated patching. It’s a far cry from the old days, when we had to remember to load anti-virus manually after starting our computer. By comparison, human security management remains largely unchanged. We create information security policies and publish them on intranets. We hold mandatory training sessions. If the problem is getting worse, then what is the solution? More policies? More mandatory training? Or is there a fundamental problem in how security professionals are approaching the problem? Remind me again: what is the problem we’re trying to solve? Our implicit assumption seems to be that the cause of insecure behaviour is a “lack of facts” on the part of the audience. Hence we distribute information in the hope that behaviour improves. But what if people have heard our message before, and that didn’t fix it? Telling people again what they have likely heard before can only have a marginal return at best.
I’m presenting for the ISSA web conference on security awareness on 22nd May.
Registrations are available here.
It’s an exciting line-up, and I’m delighted to see other speakers referencing mental models. One day we’ll look back on the old “data dump” approach to security awareness and wonder what got into us.