
It was a children’s birthday party. He cried and whinged and pleaded with tears streaming down his face. For a couple of minutes his mother said no, but eventually she pulled a chocolate biscuit from her bag and gave it to him before turning to me and saying, “I just don’t know why he cries so much”.

Operant conditioning is a term coined by B.F. Skinner that many security awareness professionals may not have heard before. Broadly, it means that ‘behaviour is a function of its consequences’. If the consequence of a behaviour is positive, the magnitude or likelihood of that behaviour tends to increase. Conversely, negative consequences have the opposite effect.
My young friend at the birthday party had been trained to whinge and cry because he had been regularly rewarded with a treat for doing so. Just as rewards and punishments influence children’s behaviour, they are also an important factor in governance, risk and compliance. However, there are interesting quirks of rewards and punishments that need to be understood by anyone trying to influence behaviour.
First, some background. Operant conditioning works by either adding or removing something. Positive reinforcement, for example, is the addition of something desirable, while negative punishment is its opposite: the removal of something desirable. The remaining two combinations are positive punishment (adding something undesirable) and negative reinforcement (removing something undesirable). For rewards or punishments to be effective, the individual needs to be able to perceive a link between the behaviour and its consequences. Working through these four combinations, Skinner identified several unexpected results:
Reward Asymmetry: Skinner found that rewards and punishments were not equal in how they shaped behaviour. Punishments needed to be regularly and reliably applied to influence behaviour, whereas an occasional reward was enough to have a long-lasting impact. Research from classical criminology supports this view, with the finding that punishments must be either certain or severe to have a significant impact on compliance. So what does this mean for infosec professionals? Basically, the use of rewards could be far more effective in shaping compliant behaviour. Many organisations get limited results from punishments because they are unable to deliver a severe consequence (owing to workplace culture or unions) or because non-compliant behaviour cannot be reliably detected. Note, however, that small monetary rewards have been shown to decrease motivation: they seem to detract from the pride and sense of self-worth that come with the behaviour itself.
“Infosec rewards are too hard,” I hear some folks say. Well, how about George the Penguin? He has his own website, Twitter account and cult following. He’s regularly seen with the management of his company, including the CEO. When he hangs out at someone’s desk for a week, it’s because that person did something awesome to help manage security risk. A visit from George costs nothing and is a powerful way of demonstrating management’s appreciation for secure behaviour.
The use of rewards and recognition for positive behaviour is common in the safety field, where punishing non-compliant behaviour was found to discourage incident reporting and lead to underreporting. It’s common, for example, to see a board tracking the number of days since the last accident. As part of a deliberate effort to create a safe working culture, this can have a powerful effect.
Reward Schedules: The longer the wait until a reward or punishment is applied, the less effect it has on behaviour, so timing matters. Try to deliver rewards or punishments as soon as possible after the behaviour you want to influence. Consider the annual audit process: feedback delivered up to a year after a behaviour occurred is unlikely to be influential unless it is particularly severe.
So what does this mean for information security? Basically, if non-compliant behaviour is difficult to detect in your organisation, or it’s not practical to punish, then you may want to consider using rewards.
Update: Last month we discussed ways that security awareness has been lowered by bad advice. How about another example: for years we told users to look out for the padlock in their browser because it meant a website was ‘safe’. In fact it meant nothing of the sort and, ironically, attackers seem better than most web admins at setting up sites without annoying security warnings. In the words of Cormac Herley, which I couldn’t put better myself: “In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.”