Over the past few weeks this blog has featured the experiences and insights from three DOE sites where high reliability concepts are being applied to reduce gaps between work as imagined and work as done. In June, their stories will be presented at the Probabilistic Safety Assessment and Management Conference. Over several months the authors collaborated with me in their “spare time” to prepare a paper on what they have done, what they are doing, and what they have learned.
This blog entry provides a macro context for their work, grounding it in high reliability research and the historical developments that have led to this point.
“Work as Imagined versus Work as Done” is both a phrase and a concept firmly established in the lexicon of high reliability and resilience scholarship, yet seldom appreciated in the often rough-and-tumble realities of complex, high-hazard operations. Perhaps in the better of these, what are referred to as High Reliability Organizations (HRO), those who hold management positions once understood what the “doing” of sharp-end work was like; but even in the best organizations, over time that knowing erodes and other demands take precedence over the realities confronted by those who live at the sharp end. Edgar Schein’s exposition of the executive, engineering, and operator cultures is a classic portrayal of this truth of complex sociotechnical systems.
Influenced by the “error management” thinking of prominent researchers, such as James Reason, and high reliability researchers, such as Karlene Roberts and Karl Weick, the emerging and well-articulated thinking of the resilience school is now prompting practitioners to reflect on what it means to adapt continuously to unexpected conditions and competing demands. The deterministic certainty of safety analysis driven work controls and procedure compliance is gradually being unsettled by the epiphany that deterministic controls cannot control the unexpected.
Lewis Carroll’s logic may inform the current situation: “It’s a poor sort of memory that only works backwards.” Similarly it’s a poor imagination that can only envision what was dreamed in the past, yet cannot conceive of realities being confronted in the moment.
To the extent that Work as Imagined versus Work as Done has penetrated the consciousness of most organizations, it has done so with the presumption fully intact that work as imagined is the standard by which work as done is to be judged. To even entertain that the reverse might be true is anathema to most responsible managers, much less to the regulator. But might there be cases where this Alice in Wonderland logic prevails? How might a non-judgmental look at work as done reveal new understandings? Might reversing the looking glass enable us to find heretofore unimagined ways to enhance performance, improve safety, reduce costs and, in fact, advance more egalitarian respect for the multiple disciplines and perspectives that enable the survival of these complex sociotechnical systems? Is it within our organizational cognitive abilities to grasp, as Dekker and Suparamaniam have phrased it, that “The gap between work-as-imagined and work-as-done is not about people not knowing or caring enough about the organization. Rather, it is about the organization not knowing enough about itself”?
Within a young but established community of practice in the Department of Energy (DOE), work is underway to promote a greater understanding of how work is done in DOE facilities. As DOE continues to seek an appropriate balance as owner and regulator, the desirability of an advanced governance excellence model, in preference to a prescriptive, deterministic regulatory model, receives increasing attention. The work described herein may provide experiential support to this regulatory dialogue. An additional benefit may be to inform personnel succession strategies by suggesting the work process knowledge that new personnel will require. Lastly, and perhaps most importantly, the efforts of this community of practice could affirm and reveal how the talented and committed people of the DOE community labor under complex, hazardous circumstances within a context of shifting political and budgetary dynamics, yet, through little-understood adaptive local practices, daily manage to create and produce while steadily improving the level of safety for workers, the public, and the environment.
Somewhat over nine years ago, collaboration began between DOE and its contractor partners to explore new ways to think about improving performance across our enormously diverse operations. For those who don’t know DOE, it is an executive agency of the U.S. Government. DOE’s missions are to advance the national, economic, and energy security of the United States; promote scientific and technological innovation; and ensure the environmental cleanup of the national nuclear weapons complex. The agency is responsible for 24 preeminent research laboratories and production facilities, four power marketing administrations, and environmental cleanup from 50 years of nuclear defense activities covering 2 million acres, with an annual budget of about $28 billion. Our work is performed by approximately 14,000 Federal employees and about 150,000 contractor employees.
This new cognitive journey did not emerge out of thin air: it was a continuation of efforts underway since the mid-1990s, when DOE made a rather bold move to apply a standardized, high-level safety management model to all its diverse operations, a model referred to as Integrated Safety Management (ISM). Perhaps the most novel feature of ISM was that it departed from a purely prescriptive regulatory mode by calling for a high-level management system approach consisting of a set of principles and functions that served as guidelines for tailoring management systems to the work and hazards of individual facilities. Although initially controversial, the ISM model has, over the years, met with widespread acceptance throughout the Department.
While ISM was novel, it retained traditional approaches requiring detailed design specifications and hazards analyses from which are derived a series of controls implemented through training and procedures, as well as programs of continuing feedback and corrective action fueled through self-assessment, performance metrics and oversight. The idea of “verbatim compliance” is often associated with such approaches.
With a period of somewhat more stability engendered by establishing the ISM framework, a few adventurous pioneers began looking outside DOE for refinements that might yield further improvements in safety and performance. For the sake of accuracy, it is appropriate to say that, during the years of discussion accompanying the formalization of ISM, outside input was solicited. For example, over several years, DOE benefited from visits and presentations by Dr. James Reason, Dr. Karlene Roberts, John Wreathall, and representatives of the Institute of Nuclear Power Operations (INPO).
As many senior managers in the DOE community have backgrounds in nuclear science, engineering, or operations, some were particularly curious to learn more about the Human Performance Improvement (HPI) initiative of the U.S. commercial nuclear power industry. The HPI initiative was developed through INPO, and, while not using the term “High Reliability Organizations,” was in fact an explicit linkage between the body of HRO-related research and practical application through industry developed and validated practices.
DOE had a longstanding relationship with INPO, and arrangements were made to introduce HPI to a few interested organizations through briefings and site visits. One site, the Idaho National Laboratory, became particularly enamored with this new approach and began a committed effort to apply HPI theory and selected techniques to work at the Laboratory. Collaboration began among Laboratory DOE officials, Laboratory operating contractor officials, and DOE Headquarters (DOE-HQ) staff, who maintained liaison with INPO. This collaboration evolved organically, stimulating interest in HPI at other DOE operations. In response, DOE HQ offered to provide a “train the trainer” session so that interested DOE contractors could develop the internal knowledge to begin educating personnel at their respective sites about HPI fundamentals.
It was emphasized from the start that, while the HPI initiative provided principles, concepts, and tools, fundamentally it represented a cognitive shift, a new way of thinking. Informally, a DOE community of practice began that continues almost 10 years later. Formation of this community of practice also marked the beginning of an implicit strategy to view DOE as a complex sociotechnical system and as such to “vet” the HPI framework and value proposition consistent with social diffusion theory. A social diffusion approach was in stark contrast to the prevailing mechanistic management approach of top-down direction of new initiatives.
As demand for “train the trainer” continued, additional sessions were conducted, and pilots were initiated. After about 18 months of training and pilot efforts, a workshop was conducted at the Oak Ridge National Laboratory. The workshop was attended by 240 representatives from all major DOE sites and included presentations from the U.S. Navy Human Performance Center, the International Society for Performance Improvement, INPO, Ontario Power Generation, and the Electric Power Research Institute. Also attending were the pioneer of HRO studies, Dr. Karlene Roberts; Dr. Robert Holmes, who spoke about HPI and complexity theory; and numerous DOE speakers, who discussed pilot efforts underway.
As within other safety-critical organizations embarking on high reliability journeys, the first challenge was to address the folklore of human error. The DOE core group was mindful of the high reliability work of Roberts and Weick, and the then-emerging resilience discussions by Hollnagel, Dekker, Wreathall, and others. James Reason’s Managing the Risks of Organizational Accidents, and Sidney Dekker’s The Field Guide to Human Error Investigations, were adopted as the introductory texts to guide DOE HPI thinking, and the INPO HPI fundamentals training was adopted and modified for the DOE community.
The early successes with HPI, including a multiple-contractor, site-wide pilot at Hanford, paved the way for evolving HPI thinking toward a broader HRO concept. A strength of the HPI framework is that it clearly debunks the idea that error is intentional and deserving of blame and emphasizes, instead, that organizational and cultural factors are antecedents to error. Yet retaining the label of “human error” is a “latent weakness” of the framework, as it tempts managers and designers to continue focusing on sharp-end prevention of error. Some began to realize that a more direct, management-focused effort was needed to confront management’s principal responsibility for change.
The DOE Pantex contractor saw the opportunity to build upon success with ISM and HPI by engaging the management team in a dialogue on HROs. Through a seminar series conducted by a staff scientist, the management team was provided with a basic knowledge of HRO theory and literature and was led by the organization’s president in reflective discussions challenging whether their organization demanded a high reliability approach. This 9-week seminar series was followed a few months later by a joint meeting of Pantex and the British Atomic Weapons Establishment for a lecture/discussion forum on HRO and its application to nuclear defense operations. As a result of this lengthy period of study and deliberation, Pantex embarked on a new learning model that shifted from ex post facto learning by investigating regulatory reportable events to deliberative study of “low-consequence, no-consequence, information-rich events.” In short, the Pantex organization developed a learning system that blended HPI principles with an explicit HRO framework. This model was applied to examining management concerns, not necessarily reportable in nature, to see what could be learned about the fundamental culture, structure, and behaviors of the organization.
Two other developments were essential to promote awareness about “Work as Imagined versus Work as Done.” First was the establishment of an HPI Task Group by the Energy Facility Contractors Group (EFCOG), and second was the formation of a joint DOE/EFCOG working group on Safety Culture.
EFCOG is a formal teaming effort of contractors who operate DOE facilities. They collaborate in addressing issues of cross-cutting application that transcend their separate operational units in response to DOE issues or issues emerging from the external environment, such as the NASA Challenger accident. They focus on mutual analysis, education, and development of recommended practices. In 2005, EFCOG formed an HPI Task Group to share lessons learned and effective approaches in HPI implementation. From 2000 until 2005, DOE HQ had provided HPI training and facilitation support to DOE contractors. In 2006, EFCOG proposed that they build on their collective experience with pilots and assume a primary role in training, facilitation, and development of lessons learned and effective practices in HPI implementation. The DOE Savannah River Site (SRS) had begun a multi-contractor, site-wide implementation of HPI, so a senior SRS representative teamed with the HPI lead for Pantex as co-chairs for this EFCOG effort. The Brookhaven National Laboratory had embarked on Laboratory-wide HPI applications and significantly contributed to evolving discussions of high reliability and human performance. DOE HQ shifted its role from resource provider to facilitator and increased involvement in interagency HRO and culture discussions as well as greater involvement with academic researchers and non-government organizations, including the medical community.
In 2007, EFCOG established a Safety Culture Working Group with participation of DOE and the Defense Nuclear Facilities Safety Board (DNFSB). DOE and its contractors had studied the Davis-Besse nuclear plant reactor head event of 2002, the Columbia accident of 2003, and the BP Texas City accident of 2005. The DNFSB also reviewed those events and published TECH-35, Safety Management of Complex, High-Hazard Organizations, which examined lessons learned from those events and drew connections among HRO, HPI, and ISM. The DOE community was also aware of safety culture deliberations by the Nuclear Regulatory Commission and was familiar with the safety culture work of INPO. Over more than a year of work by this group, members were exposed to a wide range of literature, including the HRO work of Pantex and leading HRO researchers, such as Roberts and Weick; Resilience Engineering work by Hollnagel, Dekker, Wreathall, and others; organization culture work by Edgar Schein; and the research of leading European researchers, such as Reiman and Oedewald.[7,8] Many of the executives in this group, populated with contractor executives by design, had direct involvement with or knowledge of HPI efforts since the early 2000s. Engaging senior executives, most of them engineers by training, with a substantial dose of psychology and sociology was challenging to say the least. They often felt as Alice, that it would be so nice if something made sense for a change. Thus, the stage was set for venturing beyond deterministic models to peer into realms of socially constructed knowledge and work practices as discussed by Orr, Orr and Barley, Brown and Duguid, and Jordan.
The phrase “Work as Imagined versus Work as Actually Done” appeared in Sidney Dekker’s chapter in Resilience Engineering. Of course the concept had been discussed by many involved in resilience discussions going back to the early writings of Jens Rasmussen and, in different terms, by Brown and Duguid, who expressed it thusly: “practice is central to understanding work. Abstractions detached from practice distort or obscure intricacies of that practice. Without a clear understanding of those intricacies and the role they play, the practice itself cannot be well understood, engendered (through training), or enhanced (through innovation).”
Dekker’s phrasing has a certain resonance with many managers, particularly those who have had “hands on” responsibility for complex technical operations. Grasping the essential truth of this duality becomes more challenging for those whose experiences lie in policy or academia as compared to operations or experimental physical sciences. It is, however, a phrase that resonated with certain DOE operations, in particular with the three whose efforts have recently been chronicled in this blog.
For the researchers who have shown us new directions and the adventurers who are working to convert theory to practice in their continuing efforts to improve safety and performance, I salute you. In particular, I wish to acknowledge those at Hanford, Pantex and Savannah River who have shared their learning in the recent blogs; thank you Shane Bush, Brian Harkins, Rick Hartley, Kimberly Leffew and Bill Rigot. And thanks also to Dr. Karlene Roberts (U.C. Berkeley), Dr. Bob Bea (U.C. Berkeley School of Engineering), Dr. Najm Meshkati (U.S.C.), Robert Sumwalt (NTSB), Chris Hart (NTSB), Bill Hoyle (retired CSB), George Mortensen (INPO), Brian Baskette (INPO), John Summers, and Tony Muschara (both retired from INPO) for your continuing inspiration and support for our efforts.
[1] K. Weick, Sensemaking in Organizations, Sage Publications, 1995, Thousand Oaks.
[2] S. Dekker and N. Suparamaniam, “Of Hierarchy and Hoarding: How ‘Inefficiencies’ Actually Make Disaster Relief Work,” The Australasian Journal of Disaster and Trauma Studies, vol. 2006-2, pp. 40-57 (2006).
[3] J. Reason, Managing the Risks of Organizational Accidents, Ashgate Publishing Company, 1997, Burlington.
[4] S. Dekker, The Field Guide to Human Error Investigations, Ashgate Publishing Company, 2002, Burlington.
[5] Defense Nuclear Facilities Safety Board, “Safety Management of Complex, High-Hazard Organizations,” TECH-35 (2004).
[6] E. Hollnagel, D. Woods, and N. Leveson, Resilience Engineering: Concepts and Precepts, Ashgate Publishing Company, 2006, Burlington.
[7] P. Oedewald and T. Reiman, Special Characteristics of Safety-Critical Organizations—Work Psychological Perspective, VTT Technical Research Centre of Finland (2007).
[8] T. Reiman and P. Oedewald, Evaluating Safety-Critical Organizations—Emphasis on the Nuclear Industry, Report No. 2009, VTT Technical Research Centre of Finland (2009).
[9] J. Orr, Talking About Machines: An Ethnography of a Modern Job, Cornell University Press, 1996, Ithaca.
[10] S. Barley and J. Orr (eds.), Between Craft and Science: Technical Work in the United States, Cornell University Press, 1997, Ithaca.
[11] J. Brown and P. Duguid, “Organizational Learning and Communities-of-Practice: Toward a Unified View of Working, Learning, and Innovation,” Organization Science, vol. 2, no. 1 (1991).
[12] B. Jordan, Notes on Methods for the Study of Work Practices, unpublished (2007), and Ethnographic Competence: Personal Reflections on the Past and Future of Work Practice Analysis, unpublished (2008). Available at http://www.lifescapes.org/Writeups.htm.
[13] J. Rasmussen, “The Role of Error in Organizing Behavior,” Quality and Safety in Health Care, vol. 12, pp. 377-385 (2003).