During World War Two in the Pacific, numerous isolated island cultures came into contact with Westerners for the first time. Islanders were particularly impressed with the cargo the visitors brought with them. When the war ended, most of the visitors left and the cargo stopped arriving. Across multiple islands separated by thousands of miles, a strange phenomenon occurred: islanders attempted to summon new cargo by imitating the conditions that had prevailed while the cargo was arriving. They cleared spaces for aircraft landing strips, and “controllers” dressed up with vines for wires and sticks for microphones. Bizarre ritualised behaviour developed around artefacts like uniforms and insignia. The physicist Richard Feynman popularised the phrase “Cargo Cult” to describe activity in which appearances are superficially imitated: a result is pursued without any real understanding of the underlying mechanisms of cause and effect, and prerequisites are mistaken for causes. That the same pattern emerged across so many independent island cultures suggests this confusion is part of human nature. A well-known parody of the confusion is the claim that a lack of pirates causes global warming.
Ritualistic activities in support of information security awareness are common. For example, posters and mousemat slogans are often distributed without any real understanding of how they are meant to achieve an objective, or even what the objective specifically is. They might carry sensible security messages, but are they addressing the causes of insecure behaviour? There may be organisations with security mousemats that have improved their security. But were the mousemats the cause? Or were they merely correlated with other activities occurring in the organisation? What if the real causes are more intangible, such as senior stakeholder support and ownership of security initiatives within the organisation?
To measure cause and effect we need to measure outcomes. Some security outcomes, especially a loss of confidentiality, can be difficult to detect and therefore difficult to measure. So instead we use surrogate outcomes that we expect are connected. In fact, much of what we do in managing security is based on surrogate outcomes. Have users read the policy? Have users been trained on encryption? We expect staff who have had security training to act in a more secure manner. But is this a reliable surrogate outcome? The problem is in knowing the strength of the correlation, and sometimes our instincts are wrong. There are numerous examples of surrogate outcomes that, once tested, turn out to fail or even to be counterproductive. For example, blood pressure is a factor in heart attacks, so lowering blood pressure ought to be a good surrogate outcome for preventing illness. Doxazosin was a drug that lowered blood pressure, and the expectation amongst the medical community was that it would therefore prevent heart attacks. When this assumption was tested in a large-scale trial, it turned out that Doxazosin actually increased patients’ risk of strokes and cardiovascular problems. Driver training reduces accidents, right? Wrong. Or at least it’s not that simple. Check out Miles Edmundson’s fantastic presentation on the Psychology of Risk. He explains how the National Highway Traffic Safety Administration in Atlanta ran one of the largest and most thorough driver safety programmes ever attempted, which produced some alarming implications for risk management practitioners. Three groups of drivers were put through the programme. The first was a control group, subject to no intervention. The second had some simulator time and instructor sessions. The third had extensive training, including 32 hours of classroom instruction, 16 hours of simulator training, and 3 hours of emergency vehicle training.
Surprisingly, it was the third group that went on to have the highest rate of accidents. Although the driving competency of the third group had most likely improved as a result of the training, some other ingredient had changed that negated the benefit expected from an increase in competency. Perhaps the drivers became less afraid of a crash and adjusted their behaviour accordingly? Perhaps they overrated their new driving skill? Maybe they were all women? I’m kidding. Whatever the actual cause, this driving example shows how our expectations of cause and effect can sometimes be mistaken. It also suggests that other factors matter beyond technical competency.
What outcomes do you track in your organisation to measure the level of security? Have you found any interesting correlations between outcomes?
Published in the September 2012 ISSA Security Journal