Human error is not a cause!
Error, mistake, faux pas, gaffe, blunder, lapse, slip, goof, oops, blooper; how many words do we have to express the idea that things don’t always happen as we expect or as we would prefer? At the 2009 CEO Conference of the Institute of Nuclear Power Operations (INPO), one CEO stated that the most important change in the commercial nuclear industry in the past decade was the recognition that people do not intentionally commit errors. INPO’s training reference guide that introduced the commercial nuclear power industry’s Human Performance Improvement (HPI) initiative stated that HPI represented “a new way of thinking.” So the question is, how might we think differently about this concept of error that seems an inevitable aspect of the human condition?
The “fact” that some 80% of accidents are “caused” by human error appears throughout the safety literature. Formal accident investigations that attribute error as cause have been used to justify blame and punishment, ostensibly to “prevent” recurrence of similar accidents. Yet after decades of labeling human error as cause, what do we really know scientifically about this fundamental human concept?
Much of the scientific work on accident causation can be traced to the aftermath of the Three Mile Island accident. Woods and Cook1 explain the situation as: “At that time, the folk model of accident causation was firmly in place among researchers and error seemed a plausible target for work on safety. It was only after a long period of empirical research on human performance and accidents that it became apparent that answering the question of what is error was neither the first step nor a useful step, but only a dead end.”
As James Reason explains in his book Human Error, error means different things to different people, and depends on context. The word derives from the Latin errare, “to wander.” In baseball, an error is the act, in the judgment of the official scorer, of a fielder misplaying a ball in a manner that allows a batter or base runner to reach one or more additional bases, when such an advance should have been prevented given ordinary effort by the fielder. In computer operation, an error is when an unexpected condition occurs.
The utility of error as causation is further complicated because error cannot be isolated as a particular psychological or behavioral phenomenon. Addressing efforts by cognitive psychologists to identify error types, Reason states that “Far from being rooted in irrational or maladaptive tendencies, these … error forms have their origin in fundamentally useful psychological processes.” He continues, quoting Ernst Mach (1905): “knowledge and error flow from the same mental sources, only success can tell one from the other.”
So it seems that what may be called error is distinguishable only retrospectively, in the presence of an undesirable outcome. Absent such an outcome, error is not observable. So if error is not observable sans outcome, is there any utility to this concept that is so rooted in cultural views of causality yet so lacking in scientific validity?
Returning to Woods and Cook: “Error is not a fixed category of scientific analysis. It is not an objective, stable state of the world. Instead, it arises from the interaction between the world and the people who create, run, and benefit (or suffer) from human systems for human purposes – a relationship between hazards in the world and our knowledge, our perceptions, and even our dread of the potential paths toward and forms of failure.” … “To use ‘error’ as a synonym for harm gives the appearance of progress where there is none.”
If the concept of error has no particular value in the analysis of failure, and indeed such use may be counterproductive, perhaps its value lies elsewhere. Viewing error as a fuzzy concept, rather than an absolute one, provides a basis for proceeding. William James’ philosophy of Pragmatism relates meaning to a concept’s purpose. Operationalization is the process of defining a fuzzy concept so as to make it measurable, in the form of variables consisting of specific observations. W. Edwards Deming explains that “An operational definition is a procedure agreed upon for translation of a concept into measurement of some kind.”
How might we understand error in a purposeful sense that promotes the human condition? Consider, as an example, physical pain. Pain may be understood as a negative consequence; something to be avoided or even feared. Alternatively, pain may be understood as one of the body’s key defense mechanisms, the purpose of which is to alert us to a threat to the body’s safety or survival. Similarly, we may shift the meaning of error from error as harm to error as warning of harm. Thus error becomes a signal that prompts protective action.
Reason offers three related “working” definitions of error, each predicated on a retrospective judgment that a predetermined course of action failed to achieve the desired outcome. He then suggests that error be understood in terms of intentions, actions, and consequences. He also suggests that error be extended from a purely individual phenomenon to include organizational phenomena. If we understand error as a signal operating on intentions, actions, and consequences, we can view this formulation as equivalent to Deming’s description of the Shewhart Cycle of Plan, Do, Study, Act. In this way, errors become signals that enable individuals and organizations to monitor how their actions relate to the plan and its anticipated outcomes, and then to adjust both plan and actions based on the feedback those errors provide.
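The Plan-Do-Study-Act cycle described above can be sketched as a simple feedback loop, with error serving as the signal that drives adjustment rather than blame. The sketch below is an illustrative analogy only (the function name, gain, and numbers are my assumptions, not anything from the original text); it shows how repeatedly studying the gap between outcome and goal, and acting on it, converges toward the goal:

```python
# Illustrative sketch (not from the original text): the PDSA cycle modeled
# as a proportional feedback loop, where "error" is the signal that guides
# incremental adjustment of the plan.

def pdsa(goal: float, estimate: float, gain: float = 0.5, cycles: int = 20) -> float:
    """Iteratively close the gap between a plan's estimate and its goal."""
    for _ in range(cycles):
        result = estimate          # Do: act on the current plan
        error = goal - result      # Study: error is feedback, not a verdict
        estimate += gain * error   # Act: adjust the plan using that feedback
    return estimate

# Each cycle halves the remaining gap; after 20 cycles the estimate is
# effectively at the goal.
print(round(pdsa(goal=100.0, estimate=0.0), 3))
```

The point of the analogy is the sign of the interpretation: the `error` variable is never punished or suppressed; it is the only information that lets the loop improve.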
Error is life providing feedback on our interactions with the environment. By shifting our paradigm of error from one of fear and blame to one of system feedback, we find that error is nature’s way of helping us proceed incrementally toward our goals while coping with an uncertain universe.