A couple of social media posts about diving incidents and near misses have triggered this blog, because the term ‘entirely predictable outcome’ has been used to argue that someone shouldn’t have done what they did because it was obvious it would end in injury or death. The problem is that such statements, as they applied to those particular situations, are false, and the commentators making them are often biased by hindsight.
To explore this, let's look at the dictionary definitions of ‘entirely predictable’. Entirely means ‘completely’ or ‘to the full extent’, and predictable means ‘always behaving or occurring in the same way as expected’. So ‘entirely predictable’ means that on 100% of occasions the outcome would be the one that was experienced on that particular occasion. If this were true, people would not do things which ended up with them injured or dead (unless they truly had suicidal tendencies, and such people are few and far between).
This concept of predictability, or certainty, is at the core of risk management. However, research shows that most people, in most situations, do not manage risk in a logical manner, and there are a couple of reasons for this. Kahneman and Tversky wrote a whole book on the topic! For a risk to be managed in a logical manner, the outcomes need to be entirely predictable, and the following will show why this doesn’t apply to diving.
Rolling the dice
Predictability is the concept which casinos use to make money. For example, the likelihood of rolling a 6 on a 6-sided dice can be quantified as 1 in 6, and the likelihood of pulling a specific card from a new deck can be calculated as 1 in 52. These are entirely predictable because only a certain number of outcomes are possible: I cannot roll a 7 on a single 6-sided dice, nor can I draw an ace of circles from a deck of cards. However, if I roll a 6 on the first roll, I have exactly the same chance of getting a 6 on the next roll, because the outcomes are independent of each other. Even if I roll eight sixes in a row, there is still a 1 in 6 likelihood that the next roll will be a six. To see an even distribution across the sides of the dice, we need to roll it many times (and even then the distribution will have bumps). This is why small numbers don’t work, even for predictable circumstances. Furthermore, predictability is why you can’t profitably bet on a 7 in a two-dice casino game: given the possible combinations that add up to seven, it is the most likely total when rolling two dice together, and the house always wins.
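The dice claims above are easy to check for yourself. The short sketch below counts the exact outcomes (7 really is the most common two-dice total, at 6 of 36 pairs) and simulates rolls to show that a 6 is no more or less likely immediately after a 6 (the roll count and random seed are arbitrary choices for the demonstration):

```python
import random
from collections import Counter

random.seed(42)

# Exact probabilities: a single die has 6 equally likely faces.
p_single = 1 / 6

# For two dice there are 36 equally likely (a, b) pairs;
# count how many pairs produce each total.
pair_counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
p_seven = pair_counts[7] / 36  # 6/36, the most likely total

# Independence: the previous roll doesn't change the next one.
# Look only at rolls that immediately follow a 6 and check that
# a 6 still appears about 1/6 of the time.
rolls = [random.randint(1, 6) for _ in range(200_000)]
after_six = [b for a, b in zip(rolls, rolls[1:]) if a == 6]
freq_six_after_six = after_six.count(6) / len(after_six)

print(f"P(6) on one die:       {p_single:.4f}")
print(f"P(7) with two dice:    {p_seven:.4f}")
print(f"P(6 | previous was 6): {freq_six_after_six:.4f}")
```

With enough rolls the simulated frequency settles near 1/6; with only a handful of rolls it can be wildly off, which is exactly the "small numbers" point above.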
Uncertainty in Diving
Moving to diving, there are so many variables involved that predicting their likelihoods, as single entities or as combined scores, would be almost impossible. As such, when we go diving (or do anything where we can’t actually quantify outcomes through absolute measurement), we aren’t managing risk, we are managing uncertainty. For example: will my equipment fail on this dive? How effective will the decompression be? What will the current be like? What will the visibility be? Taking it further: I am using my rebreather outside its tested environment, so how reliable will it be? I fitted my oxygen cells 15 months ago, will they still work ok? These are all uncertainties which can't be measured from where you are now.
Each of these uncertainties will have a benefit or loss associated with it, and the concepts of benefit vs loss remain the same, i.e. we make the decision based on whether the reward is worth the potential loss, and only we know what the reward is worth to us or to the team (if we have talked about it). Weirdly, the equation between gain and loss does not end up as a straight 1:1 relationship. Tversky and Kahneman (whose work underpins Kahneman’s Thinking, Fast and Slow) showed that if we are in a ‘status quo’ position and someone asks us to change to something else, we would need to perceive 2-3 times the benefit before we would consider the change. Note that it is perceived benefit, and ‘benefit’ can mean different things to different people, e.g. money, kudos, prestige or just wanting something better.
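That 2-3x asymmetry comes from Tversky and Kahneman's prospect-theory value function, in which losses are weighted by a loss-aversion coefficient they estimated at roughly 2.25. A minimal sketch, using their published parameter estimates (nothing diving-specific here):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman, 1992).

    Gains of size x are felt as x**alpha; losses are felt
    lam times more strongly, producing the 2-3x asymmetry.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Losing 100 'feels' more than twice as bad as gaining 100 feels good.
gain = value(100)
loss = value(-100)
print(f"felt gain: {gain:.1f}")
print(f"felt loss: {loss:.1f}")
print(f"loss/gain ratio: {abs(loss) / gain:.2f}")
```

The ratio of felt loss to felt gain for equal amounts is exactly the loss-aversion coefficient, which is why a change away from the status quo needs a perceived benefit well above the potential loss before it looks attractive.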
Managing Uncertainty
So how do we manage uncertainty? When we have measurable uncertainty (risk), we use logical tools like failure modes and effects analysis (FMEA) to determine likelihoods and then link these to an acceptable failure rate and associated loss, e.g. in aviation, catastrophic loss of an aircraft is considered acceptable at a rate of 1 loss per 10 million flying hours.
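To make that aviation figure concrete, a per-hour loss rate can be converted into an expected number of losses over a given exposure, assuming failures occur independently at a constant rate. The sketch below uses a hypothetical fleet exposure of one million flying hours purely for illustration; it is a back-of-envelope Poisson calculation, not part of any real FMEA:

```python
import math

rate = 1 / 10_000_000  # acceptable catastrophic losses per flying hour
hours = 1_000_000      # hypothetical fleet exposure (illustrative only)

# With independent, constant-rate failures the loss count is
# approximately Poisson(rate * hours).
expected = rate * hours
p_at_least_one = 1 - math.exp(-expected)

print(f"expected losses over {hours:,} hours: {expected:.2f}")
print(f"P(at least one loss): {p_at_least_one:.4f}")
```

This kind of calculation is only possible because the failure rate is measurable; the diving uncertainties listed earlier have no such quantified rate, which is precisely why they have to be handled differently.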
For areas where we have unmeasurable uncertainty, many researchers have shown that we use mental shortcuts because we are not able to make those computations. These shortcuts are ‘rules of thumb’ or cognitive biases and they help us make our decisions.
Such biases include:
Hindsight bias - I knew it was going to happen that way.
Outcome bias - the more severe the outcome, the harsher we judge the action.
Recency bias - something more recently experienced is acted upon first.
Availability bias - information I can recall easily means it is more prevalent or likely to happen in the real world.
Selective attention bias - we only focus on what is perceived to be important right at that time, with the level of ‘importance’ determined by previous experiences.
Overconfidence - believing we are more capable of doing something than we actually are. This is linked to the Dunning-Kruger effect.
Notwithstanding their negative aspects, biases are essential for humans to operate at the pace we do and to minimise the mental energy we consume. The brain is one of the body's major energy consumers, believed to account for approximately 20% of our energy use, so anything we can do to be cognitively efficient is a good thing. However, these same biases can lead to errors, which can lead to injuries or deaths.