What if much of our security advice to users was a waste of their time? What if some of it actually made users worse off? These are bold words, but stay with me and let's see where this goes. There will be some maths on the journey, but it will be worth it, I promise.

Let's look at passwords as an example. Many thousands of pages of security policy have been generated on creating strong passwords. It's one of the most common subjects for security awareness: more letters, more numbers, make it longer and put a special character in it. Actually, most passwords don't need to be strong; they just need to be difficult to guess, which isn't the same thing. Cormac Herley points out that password strength no longer has the critical role in security that it once did. It's largely irrelevant since most systems now control the rate of password guessing attempts, for example by only allowing five attempts every 30 minutes. In this scenario, the difference between seven-character and eight-character passwords is negligible if the system limits a brute-force attack to 240 attempts per day. Modern authentication systems are much more likely to be compromised by password database disclosures, password re-use and keyloggers, and complexity does nothing to manage any of those threats.

For years we've been focused on complexity, and as a result users come up with combinations like "Password1" which meet our complexity rules but don't effectively mitigate their risks. We need to change. We need to stop talking about password complexity and start talking about password commonality. Potentially, we're doing more harm than good by occupying valuable (and limited) attention spans with topics of marginal return. The risks have changed, and our risk communication needs to reflect that.
The best advice we can give users is not to choose common passwords that anyone else could guess in a small number of attempts. This means avoiding common words, personal references and variations on the organisation's name such as "OrganisationName123". That's it. No need to destroy the Amazon with a ten-page password optimisation procedure.
We need to ask ourselves what the utility is of the information we're providing to users. How are users going to use it for a benefit? It's not enough for our advice to be technically correct. It also needs to be practical. "Satisficing" is a term coined by economist and psychologist Herbert Simon to explain how people learn to be comfortable with acceptable outcomes rather than optimal ones. Simon demonstrated that in many situations there is far more information than an individual could realistically process in any meaningful way within an acceptable time frame. Investing excessive resources into optimising one decision reduces the resources available for other decisions. This constraint leads to a search for satisfactory solutions, rather than optimal ones. We need to stop aiming for mathematically perfect solutions and aim for adequate ones. If we chose our topics according to risk, wouldn't we spend the vast majority of our time concentrating on phishing?
A security manager recently expressed his frustration to me that staff at his organisation were allowing others to tailgate through doors. I asked him how many physical security breaches this had caused. "None" was the reply. From a technical perspective, his position was correct: it was better for security if all staff used their swipe cards to go through doors individually. However, if the staff involved knew each other, what was the harm? The staff had arrived at a workable, satisficing solution. In this situation, asking all staff to use doors one at a time is asking them to work slower and accept more delays in the name of a largely hypothetical risk. For some high-risk, regulated, government or military environments this is necessary. For other environments it's a burden with a poor cost-benefit ratio.

"Working to rule" is a tactic sometimes adopted by employees during industrial disputes, in which they studiously follow all instructions to the letter. Consider what would happen in many organisations if staff adopted a "working to security rule" tactic and completely followed every single one of their security instructions. For organisations with a large gap between security policy and normal work practice, this would be hugely disruptive. I know that as information security professionals the last thing we would want to do is disrupt the [This page has been blocked by Information Security Policy].
Published in the August 2012 ISSA Security Journal