It is easy to ascribe ‘human error’ to diving incidents because we often lack details about what happened. It is also perversely satisfying to blame someone, an individual, rather than attribute the event to a system issue. Part of this is because we can then distance ourselves and tell ourselves that “we wouldn’t have made that mistake”, a natural human reaction.
Unfortunately, blaming individuals, calling them ‘Darwin Award winners’ or pointing out their stupidity, does nothing to identify the real issues that led to the adverse event. Nor does it improve learning: those who have had near misses become reluctant to share them for fear of the social media backlash that follows posts about events whose outcomes seem so ‘obvious’.
This short piece will cover the Human Error framework from James Reason and look at ways in which we can use this to improve safety and human performance in diving.
The Swiss Cheese Model
Professor James Reason described a concept called the Swiss Cheese Model in his book Human Error. It depicts a linear process in which layers (defences and barriers) are put in place by organisations, supervisors and individuals to prevent adverse events from occurring. Unfortunately, because nothing is perfect, these layers have holes in them which allow errors to propagate along a trajectory. The layers consist of items such as organisational culture, processes and procedures, supervision and leadership, fatigue and stress management, equipment and system design, and adequate human and technical resources, all with a view to reducing the likelihood of an adverse event.
The model is split into two parts: latent conditions and active failures.
Active Failures
Active failures are the unsafe acts committed by divers who are in direct contact with the system or activity. They take a variety of forms: slips, lapses, mistakes, and violations. In diving these could include an inability to donate gas in an out-of-air (OOA) situation, an inability to ditch a weight belt, misreading a dive computer, or going below gas minimums.
Latent Conditions
Latent conditions are the inevitable “resident pathogens” within the system. They have two negative effects: they create error-provoking conditions within the immediate diving situation (e.g. time pressures, understaffing, inadequate equipment, fatigue, and inexperience), and they create long-term weaknesses or deficiencies in the defences (unreliable alarms and indicators, unworkable procedures, design and manufacturing deficiencies, and so on). Latent conditions may lie dormant and unnoticed within the system for a significant period of time before they combine with active failures and local triggers to create an accident opportunity. Shappell and Wiegmann expand on this in their HFACS model.
Reason’s simplistic model demonstrates that an adverse event occurs only when all the holes line up. However, as we know, the world is a dynamic place and humans are variable in nature, so the holes move and change size too. This means that a breach in a barrier further up the model may be caught by a barrier lower down; often it is the human in the system who provides this final barrier. The video below shows the model in action.
Animated Swiss Cheese Model from Human In the System on Vimeo.
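To make the idea of aligned holes more concrete, here is a toy simulation, not drawn from Reason's work: each layer (the names and failure probabilities below are invented purely for illustration) independently fails to trap an error, and an adverse event only occurs when every layer fails on the same dive.

```python
import random

# Toy illustration of the Swiss Cheese Model (not from Reason's book):
# each defensive layer has some probability of containing a "hole"
# (i.e. failing to catch the error). An adverse event occurs only when
# the holes in every layer line up on the same trajectory.
# Layer names and probabilities are invented for illustration.
LAYERS = {
    "organisational culture": 0.10,
    "procedures and checklists": 0.15,
    "supervision and leadership": 0.20,
    "equipment design": 0.05,
    "the diver (final barrier)": 0.25,
}

def adverse_event(layers: dict) -> bool:
    """Return True if every layer fails on this 'dive'."""
    return all(random.random() < p_hole for p_hole in layers.values())

def simulate(layers: dict, n_dives: int = 100_000) -> float:
    """Estimate how often the holes line up across n_dives."""
    events = sum(adverse_event(layers) for _ in range(n_dives))
    return events / n_dives

if __name__ == "__main__":
    print(f"All layers present:  ~{simulate(LAYERS):.5f} of dives end badly")
    # Remove one barrier (e.g. no supervision) and the rate rises sharply.
    weakened = {k: v for k, v in LAYERS.items() if k != "supervision and leadership"}
    print(f"One barrier removed: ~{simulate(weakened):.5f} of dives end badly")
```

The numbers are purely illustrative, and real barriers are neither independent nor static, which is exactly Reason's point about holes moving and changing size, but the sketch shows how each additional imperfect layer still dramatically reduces the chance of all the holes lining up.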
Biases
Unfortunately, when we look at incidents and accidents for lessons learned, we come across multiple biases which cloud our decision-making processes. A few are covered below, and more are covered in another blog I have written.
Firstly, we are biased in the way we think about time as a factor in incidents. We are used to time being a linear process (because it is!), but adverse events are often a combination of events which don’t necessarily follow the same timeline and which involve different systems we are unable to view. After an incident we can piece the jigsaw puzzle together, but in real time this is much more difficult to do given our limited short-term memory, and as such the decisions we make are based on incomplete information.
Secondly, we suffer from hindsight bias, which means that after the event we can see factors which were relevant but which we were unaware of at the time. An example might be someone who is not experienced in an overhead environment becoming disoriented and drowning; observers would say, “I knew this was going to happen, you only needed to see their attitude to safety”, and yet that diver may have undertaken similar activities without an issue. ‘Knew’ implies a high level of certainty, whereas we cannot predict with 100% accuracy how things are going to turn out.
"There is almost no human action or decision that cannot be made to look flawed and less sensible in the misleading light of hindsight. It is essential that the critic should keep himself constantly aware of the fact."
— Anthony Hidden QC
Finally, there is outcome bias: we judge adverse events which end in injury or death as much more serious than events in which nothing bad happened. An example might be a diver who didn’t analyse their nitrox gas prior to a dive and got away with it, whereas on another dive the gas station technician had mixed up cylinders and filled the cylinder with 80% instead of 32%, and the diver suffered oxygen toxicity and had a seizure at depth.
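To put that example in context, the standard maximum operating depth (MOD) calculation shows why the mixed-up fill matters so much. The sketch below assumes a maximum PO2 of 1.4 bar and roughly 10 m of seawater per bar, figures commonly used in recreational nitrox planning rather than anything specified in the incident itself.

```python
def mod_metres(fo2: float, max_po2: float = 1.4) -> float:
    """Maximum operating depth (in metres) for a given oxygen fraction,
    assuming roughly 10 m of seawater per bar of pressure."""
    return 10 * (max_po2 / fo2 - 1)

print(f"MOD of 32% nitrox: {mod_metres(0.32):.1f} m")  # ~33.8 m
print(f"MOD of 80% nitrox: {mod_metres(0.80):.1f} m")  # ~7.5 m
```

A diver planning a dive on 32% to around 30 m would be comfortably inside its MOD, but breathing an unanalysed 80% fill on the same profile would put them roughly four times deeper than that mix allows, which is why the identical omission, skipping the gas analysis, can have such different outcomes.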