I’m back from the ISSA conference in Baltimore. Conferences are a great place to test out ideas to find out which ones stand up to scrutiny. I was giving my “Death by a Thousand Facts” presentation (otherwise known as the We’ve Got It All Wrong Roadshow) when Marcus Ranum pointed out a problem with my application of the term “learned helplessness”.
Learned helplessness is a concept used to describe the effect whereby animals essentially “give up” and resign themselves to negative consequences. In a famous series of experiments, Martin Seligman placed dogs in pens with a low wall and ran an electric current through the floor to produce an unpleasant sensation. Dogs that had not encountered the shocks before jumped over the wall to escape the sensation. Surprisingly, dogs that had previously been exposed to shocks they could not escape essentially “gave up” and lay down in the pen.
Marcus pointed out that in subsequent learned helplessness experiments with higher-order mammals, the helplessness effect was found to be significantly reduced. This is a good point. A higher-order mammal would simply take action to avoid an unpleasant stimulus, such as climbing over a wall or leaving a room, regardless of its previous experiences in other rooms.
I’ve been thinking about how this applies in an information security context. Why would a higher-order mammal give up on taking action to avoid something unpleasant? Maybe it has something to do with complexity. Seligman’s experiments offered relatively simple means of escaping the unpleasant sensation. But what if you put a combination lock on the door? Or required someone to read a 100-page instruction manual to escape? Or created some other situation demanding time and concentration? Would there not be a point at which a person would give up even though they had a supposedly “better” option available? I’m speculating here, but what if learned helplessness in humans is simply a form of poor self-efficacy or response efficacy? That is, a lack of confidence that one can perform a risk-mitigating action successfully, or that the action, if successfully performed, will make a significant difference?
If not learned helplessness, how else do we explain users holding attitudes such as “viruses will get me no matter what I do, so I don’t bother with anti-virus software”?
Very interesting post – thanks.
It may be that in humans this effect is no different from that in certain other mammals, in that it is about the learning aspect. I really like the analogies you have used here. It is so true in my experience (there you go – the word “experience”) that if the solution is too complex, then helplessness becomes the unhappy option. The psychology seems also to be one of denial and even criticism. If you are a vendor of a really cool yet complex solution, not only will your solution be rejected, but it will suffer further because you have ‘insulted’ the potential user by challenging them where they don’t want to be challenged. Hence you will foster negative PR.
Nowhere is learned helplessness more rife than in risk analysis. Just like the weather, most customers want risk to be calculated and presented using the simplest methods. But we all realise that, again like the weather, there are many complexities involved in risk determination. This is where your explanation of learned helplessness kicks in!