
You wouldn’t know it by looking at it, but the information security awareness industry is in crisis. Humans are increasingly seen as the weak link in information security defences, and human factors are becoming a preferred route of attack. Time after time we see expensive technical controls bypassed by a simple call to the helpdesk, or by someone just asking users for their password. A cynic might say that’s inevitable: mistakes happen when humans are involved. But have we really made our best attempt at managing human information security risks? In this series of columns on awareness and risk communication we’ll be taking a fresh look at the ways we attempt to manage human risk.
Technical information security solutions have advanced in leaps and bounds over the last two decades. We now have real-time anti-virus, local firewalls and automated patching, a far cry from the days when we had to remember to load anti-virus manually after starting the computer. By comparison, human security management remains largely unchanged: we write information security policies and publish them on intranets, and we hold mandatory training sessions. If the problem is getting worse, what is the solution? More policies? More mandatory training? Or is there a fundamental flaw in how security professionals are approaching the problem?

Remind me again, what problem are we trying to solve? Our implicit assumption seems to be that insecure behaviour is caused by a “lack of facts” in the audience, so we distribute information in the hope that behaviour improves. But what if people have heard our message before and it didn’t fix anything? Telling people what they have likely already heard can yield only a marginal return at best.
The mainstream approach of teaching topics regardless of what audiences already know or perceive is an extraordinary waste of people’s time, both ours and our audiences’. Lance Spitzner from the SANS Securing the Human Program makes an interesting point about humans being just another operating system (OS). I think we can take his analogy even further. If we were asked to secure a Windows operating system, we’d inspect it to see which security controls were missing; to suggest firing patches at it blindly, without knowing what was already installed, would be ludicrous. And yet that’s exactly what we do with human operating systems. Where’s the awareness equivalent of the Microsoft Baseline Security Analyzer?
To facilitate a more targeted approach we need to understand why people take the information security risks they do. To us as observers, some user behaviour may seem irrational. To the individuals concerned, however, their behaviour is a perfectly logical outcome of their perspectives and motivations. People don’t set out to be 100% secure; they have competing objectives of their own, such as avoiding workplace conflict, being trusted by their colleagues and leaving work on time. We need a better understanding of users’ objectives, the mental models through which they perceive information security issues, and the motivating factors they are subject to.
Rick Wash did a fantastic piece of research on security mental models that clearly demonstrates the value of understanding audience perspectives. Wash found that American home computer users commonly believed the internet threat consisted mostly of mischievous hackers. That fundamental misunderstanding about the nature of the threat then shaped their attitudes to security behaviours such as patching and running anti-virus. The audience had heard the experts’ advice, but their beliefs about the nature of the threat overrode it; the mistaken perception prevented them from acting on good advice. Reiterating general guidance about patching and anti-virus is unlikely to help this audience. With a clearer picture of how they perceive the threat, however, the approach becomes obvious: correct the misperception first, and the advice then has a chance to stick.
For information security awareness to become more effective and efficient, it’s clear that content can’t be built from what we technical experts want to tell people, the contents of a technical standard or the prevailing best-practice topics. It needs to come from the audience we seek to protect. If we don’t know someone’s starting point, how can we give them directions?
Published in the June 2012 ISSA Security Journal