Evaluating Safeguard Effectiveness


by Kevin McManus, Chief Excellence Officer and Systems Guy, Great Systems

Back when my son had just become a teenager, I included a hike up Half Dome, the iconic monolith in Yosemite National Park, as a side hike during one of our summer vacation visits. The rock did not look that imposing to me – I had climbed higher rock faces, with greater exposure, as a staff member at Philmont Scout Ranch. Besides, many other people were going up the rock face with a single cable as a staircase – it must be safe. Evaluating safeguard effectiveness was not yet a focus in my life.

My subconscious assumption, thinking back, was that I would somehow keep everyone safe. I would do this by imparting my rock climbing knowledge to them, both prior to, and during, the climb. My words would keep them safe! How often do you count on reminders and reprimands to make sure people do things right?

My words alone would make sure they kept their balance, maintained a firm grip on the cable, and focused like a laser on the path up the rock (and not on the thousand foot drop to either side). No other safeguards were needed to protect my family – no fall protection or rock climbing footwear, no practice in a similar setting, and no standard voice commands or repeat back rules.

It’s all beginning to sound a bit like a Clark Griswold experience, I know, but haven’t you done something similar before? Fortunately, we did not even have a near miss. My son chose not to go up the sheer rock face via the cable staircase. The climb to the base of the face had been enough of a workout for him that day. His mom and I made the climb up and back, and we would both do it again without concern.

I would not do that side hike again, however, with the relatively few and quite weak safeguards that we had in place that day. It is simply too risky as a first climb given the skill mix of the family team, even in the best weather conditions. My perspective on what it takes to truly minimize human error has changed with experience and age. If we want zero errors, we need really great safeguards, not just the ‘run of the mill’ ones.

How effective are the safeguards that you use to prevent human error and equipment failure? How often do your root cause analysis efforts lead to a safeguard-focused fix versus a person-focused (weak) fix? Safeguard effectiveness is the key to minimizing daily errors and failures. This has been one of my key learnings over the past 11 years as a contract instructor for the TapRooT® root cause analysis process. As not all safeguards are created equal, safeguard effectiveness matters.

 

The Safeguard Effectiveness curve included in this post illustrates this key relationship. Daily human error and failure rates are related primarily to safeguard effectiveness, and only secondarily to safeguard number. Please note that I just said ‘safeguard effectiveness’ there, not just NUMBER OF! Layers of protection matter, but for a given package of safeguards to work, the effectiveness of each layer needs to be optimized.

One safeguard with an ‘engineered fix’ focus will probably reduce a given error rate as much as four or five weaker fixes such as a sign, a monthly training class, or the daily supervisor reminder. Our biggest problem is we fail all too often to question, let alone measure and trend, safeguard effectiveness.
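To see why one strong layer can beat several weak ones, it helps to sketch the arithmetic. The short model below treats each safeguard as an independent layer that stops some fraction of the errors that reach it. All of the effectiveness numbers are invented for illustration; they are not measurements from any real site.

```python
# Hypothetical illustration: the error rate left over after independent
# safeguard layers each catch their share. Effectiveness values are
# invented for illustration only.

def residual_error_rate(base_rate, effectiveness_list):
    """Return the error rate remaining after each layer stops a fraction of it."""
    rate = base_rate
    for e in effectiveness_list:
        rate *= (1 - e)  # each layer stops fraction `e` of what reaches it
    return rate

base = 0.10  # assumed baseline error rate per task

# Five weak, person-focused safeguards (signs, reminders, refresher training)
weak = residual_error_rate(base, [0.10, 0.10, 0.10, 0.10, 0.10])

# One strong engineered fix (for example, an interlock)
strong = residual_error_rate(base, [0.60])

print(f"five weak layers:   {weak:.4f}")    # ~0.0590
print(f"one engineered fix: {strong:.4f}")  # 0.0400
```

Under these assumed numbers, the single engineered fix leaves fewer residual errors than five weak layers combined – which is the point of the curve: effectiveness per layer dominates layer count.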

DISCOVER More: Safety System Best Practices

 

Evaluating Safeguard Effectiveness versus Safeguard Number

I recently worked with a company that had an impressive set of safety systems in place – fourteen to be exact. On a daily, weekly, and/or monthly basis, a variety of audits, inspections, briefings, work plans and permits, JHAs and FLRAs, orientations and meetings, and training are used in an effort to minimize goofs. Sounds pretty impressive, doesn’t it?

Unfortunately, evaluating safeguard effectiveness was not a focus at this location. The number of times each safeguard was used was being tracked, but measures for evaluating safeguard effectiveness had not been defined. Big problems were being captured, and some daily near miss errors were being snagged, but safeguard strengths and opportunities for improvement were not evident, leaving little room for learning. How much value was each type of safeguard REALLY returning?

How do you measure the effectiveness of your key safeguards? For example, how effectively does each of your supervisors spend their thirty minutes or so of job preparation time each day? How effectively does each employee spend his or her twenty hours of computer-based learning each year? How effective is the 40 or so hours you invest in process audits annually? Many people have not thought about evaluating their safeguards in this manner, in spite of the monies invested.
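One simple way to start answering these questions is to compare safeguards on a common footing, such as error precursors caught per hour invested. The sketch below uses the hour figures from the paragraph above (thirty minutes a day of supervisor prep over roughly 250 work days, twenty hours of computer-based learning, forty hours of audits); the “caught” counts are hypothetical examples, not real client data.

```python
# Sketch: ranking safeguards by "precursors caught per hour invested".
# Hour figures echo the article; caught counts are hypothetical.

safeguards = {
    # safeguard: (hours invested per year, error precursors caught per year)
    "daily supervisor job prep": (125.0, 30),
    "computer-based learning": (20.0, 2),
    "process audits": (40.0, 18),
}

for name, (hours, caught) in sorted(
    safeguards.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True
):
    print(f"{name}: {caught / hours:.2f} caught per hour invested")
```

Even a crude ratio like this moves the conversation from “how often did we use the safeguard?” to “what did the safeguard return?”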

EXPLORE More: Examining Job Hazard Analysis Effectiveness

 

Evaluating Safeguard Effectiveness Reduces Human Error Rates

The relationship between safeguard effectiveness and human error rates exists, whether we recognize it or not. In fact, by exploring the relative strength of your existing safeguards, you might find some patterns and systemic reasons that help explain why certain problems keep coming back. Effective safeguards reduce human error rates!

We are often lulled into an organizational trance where we think that our investments in rule enforcement, training, process improvement, and leadership development are keeping us error free. Not tracking daily errors does not mean that one is error free, however. Where is the proof? Which safeguards provide more value than others? Which safeguards need improvement?

If we don’t measure daily process errors, how do we really know? Safeguard absence and failure leads to process errors, but few process owners track such errors. They track the big problems, but not the little ones that are the precursors to the big ones. This is one key way to find failed safeguards – go out and look for them! One might argue that this is the purpose of audits, and I would agree. My experience has taught me that sampling is not enough, however, when it comes to stopping goofs for good.
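Going out and looking for failed safeguards works best when each near miss is logged against the safeguard that should have stopped it. The tally sketch below shows the idea; the safeguard names and the log entries are hypothetical.

```python
# Sketch: tallying daily near misses by the safeguard that failed to stop
# them, to surface the weakest layers. All entries are hypothetical.

from collections import Counter

# one record per near miss: which safeguard let it slip through
near_miss_log = [
    "permit review", "pre-job briefing", "permit review",
    "lockout checklist", "permit review", "pre-job briefing",
]

failures = Counter(near_miss_log)
for safeguard, count in failures.most_common():
    print(f"{safeguard}: {count} near misses slipped through")
```

Trend a tally like this week over week and the safeguards that need rework identify themselves.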

Evaluating safeguard effectiveness can be achieved via surveys of many types, criteria-based observation à la Toastmasters, and sound root cause analysis of human error. Remember, most people are not measuring the effectiveness of their existing safeguards. How much are you like the company that was using fourteen different safeguards to keep people safe, but did not measure the effectiveness of any of them directly? How error free do you really want to be? Could you become even more error free, at a lower cost, by improving safeguard effectiveness?

I don’t want you to take your work family up exposed rock faces without the right mix of safeguards. I eventually learned from my unsafe vacation act. I am blessed that I did not have to learn such lessons the hard way. Please try to avoid the classic error of assuming all of your safeguards work as intended. By evaluating safeguard effectiveness and addressing the gaps you find, I am confident that your process error rates will decline as well.

LEARN More: Process Improvement Strategies

Keep improving! – Kevin McManus, Chief Excellence Officer and Systems Guy, Great Systems

If you would like more information about the improvement tools and systems I have to offer, please send me an e-mail at kevin@greatsystems.com.

March 16th, 2016 | root cause analysis, Safety Systems

About the Author:

Kevin McManus serves as Chief Excellence Officer for Great Systems! and as an international trainer for the TapRooT® root cause analysis process. During his thirty-five plus years in the business world, he has served as an Industrial Engineer, Training Manager, Production Manager, Plant Manager, and Director of Quality. He holds an undergraduate degree in Industrial Engineering and an MBA. He has served as an Examiner and Senior Examiner for the Malcolm Baldrige National Performance Excellence Award for eighteen years. Kevin also writes the monthly performance improvement column for Industrial and Systems Engineering magazine, and he has published a new book entitled “Vital Signs, Scorecards, and Goals – the Power of Meaningful Measurement.”