I’m back from the ISSA conference in Baltimore. Conferences are a great place to test out ideas to find out which ones stand up to scrutiny. I was giving my “Death by a Thousand Facts” presentation (otherwise known as the We’ve Got It All Wrong Roadshow) when Marcus Ranum pointed out a problem with my application of the term “learned helplessness”.
Learned Helplessness is a concept used to describe the effect when animals essentially “give up” and consign themselves to negative consequences. In a famous series of experiments, Martin Seligman put dogs in pens with a low wall and ran an electric current through the floor to produce an unpleasant sensation. The dogs which had not encountered the shocks before jumped over the wall to escape the sensation. Surprisingly, the dogs which had previously been exposed to shocks which they hadn’t been able to escape essentially “gave up” and lay down in the pen.
Marcus pointed out that in subsequent experiments on learned helplessness with higher order mammals, the helplessness effect was found to be significantly reduced. This is a good point. A higher order mammal would simply take action to avoid an unpleasant stimulus, climbing over a wall or leaving the room, no matter what its previous experiences in other rooms had been.
I’ve been thinking about how this applies in an information security context. Why would a higher order mammal give up on taking action to avoid something unpleasant? Maybe it has something to do with complexity. Seligman’s experiments offered relatively simple means of escaping the unpleasant sensation. But what if you put a combination lock on the door? Or required someone to read a 100-page instruction manual to escape? Or created some other situation which demanded time and concentration? Would there not be a point at which a person would give up even though they had a supposedly “better” option available to them? I’m speculating here, but what if learned helplessness in humans is simply a form of poor self-efficacy or response efficacy? That is, a lack of confidence that one can perform a risk-mitigating action successfully, or that the action, if successfully performed, will make a significant difference?
If not learned helplessness, how else do we explain users holding attitudes such as “viruses will get me no matter what I do, so I don’t bother with anti-virus software”?