
Why do people ignore security warnings? Why do they pay attention to some advice but ignore others? Why are spammers and phishers apparently so good at getting people’s attention? Over the course of each day, we often receive dozens of warnings. We’re told that web sites are using untrusted certificates, that downloads might harm our computers and that scripts may be unsafe. We’re so used to these warnings that we hardly even notice them anymore. But what makes an effective warning message? Why do people stop and consider some messages but happily ignore others?
It can be a real challenge to get people to read and act on warning messages. After years of effort trying to educate people about risk, you would think we would have a good understanding of how people attend to warnings. That’s not the case. As technical experts in information security, our focus has been largely on what steps people need to take to remain secure. In comparison, we have a very poor understanding of how people decide whether or not to listen to our advice. We have some basic narratives around ‘idiotic users’ and ‘learned helplessness’, but little in the way of empirical evidence.
Fortunately, a team of researchers at Cambridge University have recently done some excellent work to help us understand why some messages are better at getting attention. They set out to explain why people pay attention to fake security messages but ignore real warnings. The results are very interesting, and two key findings in particular deserve our attention.
Firstly, generic warnings were found to be ineffective. Many of the common messages we see on a regular basis share the same weaknesses: they are vague in time and space, and unclear about who, if anyone, would suffer an impact from the risk and what that impact would actually be. Saying that a file could ‘potentially harm your computer’, or that ‘this webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage’, are two common examples. I wonder what our audiences think is meant by ‘compromise the security of the entire webpage’ and whether it’s anything like what was intended.
Secondly, messages that made it clear who was giving the advice tended to be more effective. Why should people listen to a particular piece of advice? If they don’t know its origin, how can they judge whether the source knows what they are talking about? There’s a reason why phishers usually claim to be the Lending Manager from the State Bank of Nigeria rather than the cleaner at an African financial institution. Knowing where the information came from potentially makes it seem more trustworthy because it appears verifiable. Most people wouldn’t ever actually take the steps to verify, but it may be a safety blanket to know that they could if they wanted to.
In many ways, what Professors Anderson and Modic have described is the process of habituation in higher-order mammals: the gradual process of learning which stimuli can be safely ignored. In the wild, it’s a waste of energy to jump and run away every time there’s a rustle in the grass. When nothing bad happens (or appears to happen) after you take a risk, you learn that the risk is probably harmless. Users have learned the same thing from the thousands of times they have clicked Yes/Next/Continue and the sky hasn’t fallen. The sheer volume of warnings that people receive means they have to filter the advice they are given. There is also an argument to be made that many warnings are simply a waste of people’s time.
Looking at the inverse of these two findings helps us understand why spamming and phishing emails can be so effective. They usually feature an immediate call to action, explicitly stating that something needs to be done right away. They also often give specific examples of harm and are clear about whom the harm might affect.
While the work of Professors Anderson and Modic primarily examined the design of warning messages within information systems, the results have important implications for information security awareness campaigns. Is your campaign expressly relevant to the audience being targeted? Can they see that it is? Are the warnings direct and specific? Do they make clear who or what the source of the warning is, and why people should pay attention to it? If not, you might find that your content is just background noise.
Published in the February 2014 edition of the ISSA International Journal.