Accident causation model

[PEREZGONZALEZ Jose D, ZiZhang NG & Abdul M YOOSUF (2011). Accident causation model. Journal of Knowledge Advancement & Integration (ISSN 1177-4576), 2011, pages 25-27.]

The Accident causation model, better known as the 'Swiss cheese model', is a theoretical model that illustrates how human error at all levels of an organization may lead to accidents. It was first published by James Reason (1990).

The model is well suited to complex production systems, where a hierarchical organizational structure tends to exist (managers, front-line personnel, physical and operational barriers, etc). Because managers, staff and barriers are physical objects, we can call them structures or structural elements, in contrast to, for example, ideas or beliefs (Pérezgonzález, 2005). These structural elements are the main focus of Reason's model.

The basic structural elements identified in the model are the following:
  • Decision makers. These include high-level managers, who set goals and manage strategy to maximize system performance (eg, productivity and safety).
  • Line management. These include departmental managers, who implement the decision makers' goals and strategies within their areas of operation (eg, training, sales, etc).
  • Preconditions. These refer to qualities possessed by people, machines and the environment at the operational level (eg, a motivated workforce, reliable equipment, organizational culture, environmental conditions, etc).
  • Productive activities. These refer to actual performance at operational levels.
  • Defenses. These refer to safeguards and other protections that deal with foreseeable negative outcomes, for example by preventing such outcomes, protecting the workforce and machines, etc.

Accidents occur because weaknesses or 'windows of opportunity' either exist or open up at all levels of the production system, allowing a 'chain of events' to start at the upper echelons of the structure and move down, ultimately resulting in an accident if the chain is not stopped before it reaches the bottom. Put otherwise, most (if not all) accidents can be traced back to weaknesses at all levels of the system, including the decision makers level.

[Figure: the Swiss cheese model. Image embedded from CrewResourceManagement.net on 11 December 2009.]
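
This dynamic lends itself to a toy simulation. The minimal, purely illustrative sketch below follows a negative input down the five layers listed earlier; an accident results only when every layer happens to have a weakness open when the input arrives. The hole probabilities, function name and trial count are assumptions for illustration, not part of Reason's model.

```python
import random

# The five structural layers named in the model, ordered from the upper
# echelons of the organization down to the defenses.
LAYERS = [
    "decision makers",
    "line management",
    "preconditions",
    "productive activities",
    "defenses",
]

# Hypothetical probability that a 'window of opportunity' is open at each
# layer when the negative input reaches it (illustrative values only).
HOLE_PROBABILITY = {layer: 0.1 for layer in LAYERS}

def becomes_accident(rng: random.Random) -> bool:
    """Follow one negative input down the structure. The chain of events
    continues only while each successive layer happens to have its
    weakness open; any layer that holds stops the chain."""
    for layer in LAYERS:
        if rng.random() >= HOLE_PROBABILITY[layer]:
            return False  # this layer captured the threat
    return True  # weaknesses lined up at every layer: an accident

rng = random.Random(42)
trials = 100_000
accidents = sum(becomes_accident(rng) for _ in range(trials))
print(f"Accidents in {trials} trials: {accidents}")
```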

These weaknesses or windows of opportunity can be due to several factors, such as mechanical or technical failures, although the 'human factor' seems to be the most frequent, or at least the most traceable, source of failure in most accidents. These weaknesses thus map onto the normal structure and are therefore particular to each organizational level. Human weaknesses in the system can be listed as follows (a sketch of the mapping follows the list):

  • Fallible decisions at decision makers level.
  • Line management deficiencies at line management levels.
  • Psychological precursors of unsafe acts at precondition levels.
  • Unsafe acts at production levels.
  • Inadequate defenses at the defenses level.
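
As a minimal illustration of this mapping, the pairing of organizational levels with their characteristic human weaknesses can be written out directly, and an accident that breached every layer then traces back to a weakness at each level. The function name and output format are hypothetical; the labels come from the list above.

```python
# Pairing of each organizational level with the human weakness the model
# attributes to it (labels taken from the list above).
HUMAN_WEAKNESS_BY_LAYER = {
    "decision makers": "fallible decisions",
    "line management": "line management deficiencies",
    "preconditions": "psychological precursors of unsafe acts",
    "productive activities": "unsafe acts",
    "defenses": "inadequate defenses",
}

def trace_accident(layers_breached: list[str]) -> list[str]:
    """Map an accident trajectory back onto the weaknesses it exposed."""
    return [f"{layer}: {HUMAN_WEAKNESS_BY_LAYER[layer]}"
            for layer in layers_breached]

# An accident that passed through all five layers traces back to
# weaknesses at every level, including the decision makers level.
for finding in trace_accident(list(HUMAN_WEAKNESS_BY_LAYER)):
    print(finding)
```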

The Accident causation model occupies one of the last chapters in a book dedicated to the description of human error, especially from a cognitive psychology perspective (see Reason, 1990). Its placement in the book is relevant because it suggests several things:

  • Firstly, it expands the role of the human from a restricted view of humans as single entities to a view of humans as entities within a more complex system (eg, an organizational system). That is, humans do not normally act alone but interact with those complex systems.
  • Secondly, Reason's model does not minimize or alter the typical view of humans as the main cause of accidents. In fact, what the model does is make more transparent that human error does not only happen at the front end of the system (eg, active errors or unsafe acts) but can normally also be found in other organizational layers (eg, the management layers). Except for the defense-in-depth layer, most of Reason's model deals with human error and not with system error. Thus, it is not a model of system performance but a model of human error in organizational systems. In so doing, it perpetuates the understanding that humans are fallible elements in the system, from which a reasonable conclusion may be that removing them from any layer of the system as far as practicable should render a safer system.
  • Thirdly, the typical representation of the model as a series of Swiss cheese slices illustrates that all layers have weaknesses, or windows of opportunity, through which a negative input can pass and become an accident. Alternatively, those layers may close their weaknesses, so helping capture threats and preventing negative inputs from progressing further. Thus, a reasonable conclusion is that more organizational layers (or greater system complexity) are preferable to fewer organizational layers (or lesser system complexity), as more layers allow greater chances of capturing or blocking the path of those inputs through the system (see the back-of-envelope sketch after this list). So far, though, organizational complexity refers to the human side of the equation, as other potential barriers are not built into the organizational layers but only at the defense-in-depth layer. Again, this emphasizes that the model is a model of human error within systems.
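
The intuition behind that conclusion can be made concrete with a back-of-envelope calculation. Assuming, purely for illustration, that each layer independently fails to capture a threat with probability p, the chance that a negative input traverses all n layers is p^n, which shrinks rapidly as layers are added; neither the independence assumption nor the value p = 0.1 comes from the model itself.

```python
# Back-of-envelope: probability that a negative input traverses all n
# layers when each layer independently fails with probability p.
p = 0.1  # illustrative per-layer failure probability
for n in (1, 3, 5):
    print(f"{n} layer(s) -> accident probability {p**n:.0e}")
# 1 layer(s) -> accident probability 1e-01
# 3 layer(s) -> accident probability 1e-03
# 5 layer(s) -> accident probability 1e-05
```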

Authors

Jose D PEREZGONZALEZ (2011). Massey University, Turitea Campus, Private Bag 11-222, Palmerston North 4442, New Zealand. (JDPerezgonzalez)
ZiZhang NG (2008). Massey University, Turitea Campus, Private Bag 11-222, Palmerston North 4442, New Zealand. (ZiZhanG NG)
Abdul M YOOSUF (2009). Massey University, Turitea Campus, Private Bag 11-222, Palmerston North 4442, New Zealand. (yoosuf_am)

