New models yield new indicators and improvement follows

(Editor’s Intro) I’m pleased to introduce a new posting from Mr. M. Dexter Ray (Dex) of the DOE Savannah River Site, where he works in the URS Washington Division ESH&QA Department at Savannah River Remediation (SRR).  Dex is the project manager for site-wide application of the Savannah River high-reliability improvement efforts, derived from the commercial nuclear industry’s Human Performance Improvement concepts.  Dex holds a Master of Science in Project Management from the University of Wisconsin and a Bachelor of Science in Engineering from the University of Southern Illinois, and is also PMP certified.  He has twenty-eight years of experience in construction, operation, maintenance, project management, quality engineering, project design, work planning, procedure writing, and training in nuclear power generation and DOE government facilities.  Previous positions include Project Manager, Quality Assurance Deputy Manager, Training & Procedure Manager, Field Engineering, Electrical Design Engineering, and Start-up Engineering.

Error Coding Issues

A site-wide tracking tool is used to identify and analyze issues in our processes and to provide leading indicators for continuous improvement within our organizations.


Performance indicators are metrics designed to give a manager an indication of the current status of a process.  Usually, performance indicators measure the outcome or final production of a process (such as the number of widgets produced per day); since the value of such an indicator cannot be determined until the associated process completes, it is considered a lagging indicator.  In a repetitive process or production situation, where future outcomes can be adjusted based on past outcomes, lagging indicators can easily be used to manage the process.  However, it is often desirable to be able to predict the outcome before the process completes, or even before it begins.  In these situations an outcome-based indicator is not yet available, and an alternative, a leading indicator, becomes the metric of choice.  In other situations, such as processes designed to prevent losses or ensure personnel or facility safety, the intent is to avoid a detrimental outcome (such as an accident) entirely.

Therefore, the process of selecting an appropriate set of leading indicators is deceptively simple:  (1) select an appropriate set of goals; (2) identify the processes and programs that are essential components of satisfying the goal; and (3) determine the metrics that best monitor the ongoing status or functionality of those processes and programs.

The problem becomes one of deciding which artifacts best indicate the program’s ability to provide the function necessary to achieve the selected goal.  Since the goals cannot be achieved without the satisfactory performance of the humans who operate these programs, a set of metrics was selected and monitored for human performance errors.  The human errors associated with significant events are the same kinds of errors as the non-consequential ones.  How can we start to capture these 600 non-consequential errors at the bottom of the triangle so we can learn from them?

Heinrich’s severity pyramid demonstrates the large number of non-consequential errors that occur for each significant event.

In January 2009, the site HPI Working Group sponsored an initiative to identify codes to capture these non-consequential human errors, based on the INPO Performance Model.  The group analyzed over 1,000 issues and developed a total of 150 common human performance error codes for the workplace.

The premise is that the worker, being a product of the organizational environment, acts the same way day in and day out.  Frequently an error is provoked when flawed defenses, latent organizational weaknesses, or error precursors are present.  By coding and trending the causes of these errors, one can gain insight into local work practices and the deep assumptions of our organizational culture.

We have identified and trained peer workers to be “Issue Analysts” at each facility; they play a key role in ensuring data is accurate and consistent throughout the site.  These analysts evaluate and code each issue and enter it into our system.  The site uses a database system called “Site Tracking, Analysis, and Reporting” (STAR).  STAR is an electronic system where issues are entered, evaluation results are captured, and associated actions are tracked to closure.  In addition, STAR can provide performance analysis and leading indicators for monitoring and trending, accessible to the entire organization.
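As a rough illustration of the kind of record such a tracking system holds, the sketch below models an issue with its error codes and corrective actions tracked to closure.  The field names and record layout are assumptions for illustration only, not the actual STAR schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical issue record; fields are illustrative assumptions,
# not the real STAR database schema.
@dataclass
class Issue:
    issue_id: str
    description: str
    error_codes: list            # e.g. ["FJ03"] for a workaround
    entered_on: date
    actions: list = field(default_factory=list)  # corrective actions
    closed: bool = False         # tracked to closure

issue = Issue("STAR-0001", "Workaround on valve lineup",
              ["FJ03"], date(2009, 6, 1))
issue.actions.append("Consolidate turnover databases")
issue.closed = True
```

Coding each issue at entry is what makes the later trending possible: the codes, not the free-text descriptions, are what get counted and compared month to month.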

Examples of error coding used at the site:

Topical Areas:

FJ Job-Site Conditions

FL Leadership Practices

FO Organization Factors

FW Worker Behavior

FP Plant Results & Process Improvements

Each month the error code counts are summed and analyzed, and the common areas are identified, such as job-site conditions (FJ), workarounds (FJ03), environmental conditions (FJ02), etc.  The four error codes with the highest percentages, as identified by the issue analysts, are presented to the Management Review Team (MRT) for verification and feedback throughout the organization.  The MRT then assigns corrective actions to address the four most common errors and prevent future events.
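The monthly roll-up described above amounts to a simple tally: count each error code, then report the four highest-percentage codes to the MRT.  A minimal sketch, with codes and counts invented for illustration rather than taken from real site data:

```python
from collections import Counter

# One month of coded issues (made-up example data).
monthly_codes = (["FJ03"] * 22 + ["FJ02"] * 15 + ["FW01"] * 12 +
                 ["FL04"] * 9 + ["FO02"] * 5)

counts = Counter(monthly_codes)          # tally per error code
total = sum(counts.values())
top_four = counts.most_common(4)         # highest-count codes for the MRT

for code, n in top_four:
    print(f"{code}: {n} ({100 * n / total:.0f}%)")
```

With these sample numbers the report would lead with FJ03 (workarounds) at 22 of 63 issues; in practice each of the top four codes would be taken to the MRT with the underlying issues for verification.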

Over the past six months, error coding has proven to be a valuable leading indicator at the lowest level of the facility.  Over one hundred fifty processes and systems have been improved based on these leading indicators.

Below are some examples of leading indicators developed from our STAR system based on the error coding by our analysts.

[Chart: leading indicators derived from STAR error coding]

Procedure Use and Adherence
Self-Checking (10%)
Clear Expectations
Procedures and Revisions Program
Management Oversight

Examples of trends of errors leading to improvement in a process:

Turnover Process:

Of a hundred minor-consequence and non-consequential errors reported over a three-month period, more than twenty were connected with turnovers, indicating a possible error-provoking condition in the turnover process.  Further evaluation of the turnover process identified multiple, un-integrated turnover databases that did not communicate with each other and placed an unnecessary burden on the shift workers.  Actions were identified to consolidate and streamline the databases, leading to fewer future errors.

Pre-Job Briefing:

During June 2009 the pre-job briefing process was identified as the top coding issue.  Management reviewed the data with the workforce and determined that the cause was that not all sites were using the latest approved procedure and checklist.  Based on this leading indicator, we developed a site-wide communication to correct the problem.

Procedure Use:

During August 2009, we began hiring under the American Recovery and Reinvestment Act (ARRA), bringing in people with no knowledge of the site policies and procedures.  Our procedure error code became the top area of concern.  Based on this leading indicator, the site started mentoring and coaching each employee on procedure use and adherence prior to assignment to the facilities, which corrected the trend the following month.

Error Reduction Tool Use:

During November 2009, both contractors, SRNS and SRR, were in the process of hiring 2,500 new employees to support the ARRA.  These new employees had never been exposed to human performance error-reduction tools, so our HPI error code hit the number-one spot.  Based on this indicator, our management team developed an HPI briefing on tool reinforcement for all new ARRA employees, which corrected the negative trend.


The key to determining a good set of leading indicators is to first understand the goal and then understand which programs are essential to meeting that goal.  Since our goal cannot be achieved without the satisfactory performance of the humans who operate these programs and systems, we have selected a set of metrics (error codes) to monitor human performance errors.  These leading indicators can be used at a variety of organizational levels, but ideally should be applied consistently, at a low level, across the entire organization to optimally balance its various interlinking and competing priorities.

Furthermore, the process of determining leading indicators should be a “living” process, with frequent reviews of the selected goals and the essential programs from which the set of indicators was derived.  As the organization transitions through its life cycle or responds to changes in its environment, the previous set of leading indicators may no longer be consistent with the direction the organization intends to move.

3 Responses to New models yield new indicators and improvement follows

  1. What are the factors that result in the nature, the magnitude, the location, and the timing of the procedure use and adherence errors?

  2. Vicky M. Garner says:

    Factors that result in PU&A errors are:

    1. Inexperience
    2. Lack of Risk Recognition
    3. At risk behaviors by senior workers passed on to less knowledgeable junior workers (filling in the blanks when the work document is lacking)
    4. Misinterpretation
