Measuring Safety Part 3 – Serious Injury Prevention – S is for Subjective


In Measuring Safety Part 1, we reviewed the drawbacks of focusing solely on the measurement of safety outcomes without understanding and tracking the operational processes and events that are predictive of a safe workplace. In Part 2 of the series, we dove deeper into the implications of this thinking by reviewing “Serious Injury Fatality” (SIF). In this Part 3, we look at the subjective nature of recording for serious injury and fatality prevention, and whether SIF is indeed the better approach to take.

Building a SIF Safety Triangle addresses the shortcomings of previous safety prevention frameworks, but it also introduces new issues. Our next question: who decides on the categorization of processes and events that have the potential for serious injury and fatality? And how is this categorization done?

It seems relatively easy to define some categories that separate SIF from non-SIF phenomena. As a starting point, Dominic Cooper describes a five-level classification system that borrows from methods commonly used in hospital emergency departments. Donald K. Martin provides an alternative in which companies register their high-risk situations, examine management controls, and determine whether the leadership and culture work to prevent these situations or allow them to continue. There are surely several other methods, including the use of risk matrices. The challenge with multiple methodologies is that it becomes hard to build broader benchmarks of understanding by comparing SIFs across organizations.
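
To make the categorization challenge concrete, below is a minimal sketch of a generic five-by-five risk matrix in Python. It is not Cooper’s or Martin’s actual method; the level names and the SIF threshold are assumptions for illustration only. The point it makes is that the SIF/non-SIF boundary is a design choice, and different organizations will draw it differently.

```python
# Illustrative sketch only: a generic 5x5 risk matrix.
# The labels and the SIF rule below are assumptions, not a published standard.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]  # scored 1-5
SEVERITY = ["negligible", "minor", "moderate", "major", "fatal"]           # scored 1-5

def risk_score(likelihood: int, severity: int) -> int:
    """Combine likelihood (1-5) and severity (1-5) into a single score."""
    return likelihood * severity

def has_sif_potential(likelihood: int, severity: int) -> bool:
    """One possible rule: 'major' or worse severity combined with at least
    'possible' likelihood counts as SIF potential. Another organization
    might reasonably draw this line somewhere else."""
    return severity >= 4 and likelihood >= 3

# Example: a slip on a rail wagon vs. a slip in the parking lot.
print(has_sif_potential(likelihood=3, severity=5))  # True  -> SIF potential
print(has_sif_potential(likelihood=2, severity=5))  # False -> non-SIF, despite a credible fatal outcome
```

The trouble, as the rest of this article argues, is that the likelihood and severity inputs are themselves subjective judgements.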

Another major question is: How do you reliably assess the potential for a SIF without the assessment becoming arbitrary and subjective?

We need to think not only about possible consequences but also about the probability or likelihood of an event. I can think up a fatal scenario for almost any event, but where does one draw the line? An understanding of probability is helpful here, but certainly not a perfectly clean solution to categorization.

As outlined in Saving Lives and Limbs with Big Data: “Easily illustrated, if someone slips and falls under most circumstances they have a high probability of minor injury (e.g., bruising), a modest but lower likelihood of serious harm (e.g., broken bones), and a slight (but real) risk of fatality (e.g., a severe head injury).” Unfortunately, I have witnessed such low probability/high consequence events occur.
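
A tiny sketch of that quote, with made-up numbers, shows why the line is hard to draw: the same event gets a different SIF label depending on whether you classify it by its most likely outcome or by its worst credible outcome.

```python
# Made-up outcome probabilities for a generic slip-and-fall,
# roughly following the pattern quoted above (illustrative only).
slip_and_fall = {
    "minor injury (bruising)": 0.90,
    "serious harm (broken bones)": 0.09,
    "fatality (severe head injury)": 0.01,
}

most_likely = max(slip_and_fall, key=slip_and_fall.get)
worst_credible = "fatality (severe head injury)"  # the tail of the distribution

print(most_likely)      # classify by this and the event is non-SIF
print(worst_credible)   # classify by this and nearly every event becomes a SIF
```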

One of the examples I encountered while reading up on SIFs mentions a case of slipping and falling. When this happens on a rail wagon, it is considered a SIF, which makes sense given the hazards present, while slipping and falling in the parking lot is rated as low risk and non-SIF. One can question this categorization, however, because slipping and falling in the parking lot could be fatal if you hit your head, or severe if you break a hip. An example of this happened several years ago. A colleague of mine slipped on an ice-covered walkway at the workshop and fell. He landed on his arm, which he initially shook off, like most people would have done. My colleague suffered pain for a long period, had an operation and was registered on sick leave. The incident turned into a major, lasting injury which reduced the use of his arm for many years.

This incident turned out to have been the most serious accident that year.

Staying with slips and falls: during my time at the Norwegian railroad administration I oversaw dozens of incident reports where passengers slipped on icy platforms. A few led to minor injuries, but most were simply a nuisance to the victim. None would have made it into a SIF category. I believe this was rather short-sighted, because an individual could have had a complication after the fall, such as a broken hip. Another example is the event where a 16-year-old girl slipped onto the subway tracks and tragically died. From the photo, it would appear the ice on the platform was a factor to consider.

Photo: Fouad Acharki / NRK

When low probability/serious injury events do occur, it raises the question of whether all such prior and future events should be re-categorized as SIFs.

Counterfactual Reporting

In the above portion of our discussion, we have addressed the “how to categorize” problems. In the remainder, we will review the “who decides” problem.

From our work experience, we know that different people judge cases differently. As illustrated above, knowledge of real-world cases may lead to a different categorization, making the process even more subjective. The issue of inter-rater reliability mentioned in the Donald K. Martin/Allison Black paper applies not only to academic studies but also to real-world situations where managers, front-liners and safety people start categorizing cases based on their own perspectives.

A colleague of mine conducted a research project at the multinational company where she was employed. Managers had to assess all incidents that had occurred according to a corporate risk matrix. The results showed that individuals had widely varying understandings of the various categories. An incident flagged as “red” by one manager or safety advisor was not necessarily “red” for another. Additionally, she found there was a very strong disincentive for managers to score incidents as “red.” The pursuit of better safety outcome numbers led to misclassifications: many would score potentially high-risk incidents in the less severe “amber” category because those numbers did not have to be reported back to the corporate head office. EHS professionals need to be mindful of how the reporting and management of numbers may impact SIF categorizations.
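
Inter-rater reliability can at least be measured. Below is a small, hedged sketch (the incident ratings are invented) of Cohen’s kappa, a standard statistic that compares how often two raters agree beyond what chance alone would produce; values well below 1 signal exactly the kind of disagreement my colleague found.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters independently pick the same label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Invented example: a manager and a safety advisor rating the same ten incidents.
manager = ["amber", "amber", "green", "red", "amber", "green", "amber", "amber", "green", "amber"]
advisor = ["red",   "amber", "green", "red", "red",   "amber", "amber", "green", "green", "red"]

print(round(cohen_kappa(manager, advisor), 2))  # about 0.28: well below 1, i.e. weak agreement
```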

There is another odd thing: SIF programs require you to report counterfactuals, that is, things that did not happen but could have happened. Is that a bad thing? Not entirely, because thinking about potential outcomes can help you to be aware of the risks you are running and how vulnerable you may be. Reporting something that did not happen is a bit silly, however, and as argued above, very subjective.

Useful or Useless?

It may be problematic, but I believe a SIF framework and approach is a useful tool for improvement. It pushes us to focus on precursors of incidents, not just outcomes. In its general application, focusing on and measuring high-risk precursors that are likely to lead to SIFs can contribute to reducing workplace fatalities. By contrast, focusing only on outcomes creates the opportunity for blind spots. It is a wise approach and, in fairness, one which Heinrich understood and shared many decades ago. Through time and consistent repetition, this thinking is now being adopted as mainstream.

A drawback is that SIF can be quite misleading when used as a tool for measurement. Some organizations use SIF as a metric; in Statoil’s sustainability reports, for example, it is remodelled into Serious Incident Frequency. I believe this is not wise because it may lead to another form of what Fred Manuele calls a “delusion”: a positive SIF trend may give the impression of good safety even though it could simply mean the under-reporting of “reds.” Relying solely on subjective reporting opens the potential for this type of problem.
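
A simple made-up illustration of that delusion: if the underlying number of high-potential incidents stays flat but classification gradually drifts from “red” to “amber,” the reported SIF metric still trends downwards.

```python
# Made-up numbers, for illustration only.
true_high_potential = [20, 20, 20, 20]       # incidents per year, unchanged
share_classified_red = [0.9, 0.7, 0.5, 0.3]  # classification drift towards "amber"

reported_sif = [round(n * s) for n, s in zip(true_high_potential, share_classified_red)]
print(reported_sif)  # [18, 14, 10, 6] -> a "positive" trend with no real improvement
```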

Although I see positive elements, I am afraid that many SIF programs are only a slightly improved, rebranded and polished way to sell traditional safety programs, consultancy, and BBS systems.

Therefore, in many cases SIF = Old Wine + New Bottles. Nice try, but no cigar!

Thoughts and discussion welcome!

———————————————————

Carsten Busch is a self-declared Safety Mythologist and author of the well-received book Safety Myth 101. The book collects 123 (and then some) Safety Myths. Crisp and compact discussions address weaknesses of conventional safety ‘wisdom’ and give suggestions for alternative approaches and improvement. An entire chapter of the book is dedicated to measuring safety and indicators. Another chapter deals entirely with learning from incidents.

http://www.mindtherisk.com/the-book

One thought on “Measuring Safety Part 3 – Serious Injury Prevention – S is for Subjective”

  1. I think the biggest challenge is not in predicting or preventing SIFs, but in how corporations report health and safety performance. The metric and the quantum of that metric drive management action and attention. In large organisations, by the time that SIF prevention or frequency performance data makes its way up from the front-line team all the way to head office, it has lost its meaning. Once the data reaches the top, if it is not where it should be, management attention often drives misdirected action. I think the data needs to stay at team level, maybe at department level. The data has to be useful to the team, in their context, to develop resilience and improve performance.
