Measuring Safety, part 1 – The Relevance of Outcomes

The other day another self-congratulatory message appeared in my news feed: one of Norway’s major construction contractors was celebrating the one-year anniversary of its last lost time injury, making its LTIF (lost time injury frequency) now “Zero”.
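(For readers less familiar with the metric: a common convention, though not a universal standard, is to express LTIF as the number of lost time injuries per million hours worked, i.e. LTIF = (lost time injuries × 1,000,000) / total hours worked; some organisations normalise per 200,000 hours instead.)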

While reading James Reason’s latest book, “Organisational Accidents Revisited”, I noticed the quote “The road to Hell is paved with falling LTI frequency rates”, illustrated by major cases like Deepwater Horizon and Texas City.

I believe it is good when no one has been injured as a consequence of their work. At the same time, this has again turned my attention to something that has kept me busy for many years:

why are people so focused on outcomes, when they mean so little in terms of improvement, especially in safety?

Obsessed with Outcomes

When an incident or accident happens, it is generally the consequences that attract the most attention. From a humanitarian and empathetic perspective this is natural. It is a tragedy for all those involved, and they need comfort amid the pain, loss and, at times, life-altering experiences. When there have been severe outcomes, regulators and enforcers get involved. In some cases, criminal investigators are dispatched to determine whether laws were broken, or whether it is a case for the justice system to address.

Included in this list of spectators are the media, the general public and politicians, who are often interested in the “sensationalist” aspects of the event: the pictures of damage, a dramatic story with victims, survivors and eyewitnesses, or an opportunity to leverage the situation for one’s own political gain.

Many safety professionals tend to focus most of their attention on outcomes because that is how we traditionally measure safety, or because we have been taught it is best practice. We register fatalities and injuries of varying degrees of seriousness. Interestingly, even when organisations register information about other safety-relevant events, like material damage, near misses, observations or proactive reportables, the information regarding outcomes is viewed as the most reliable or complete. I believe this is because (1) regulations often require us to register and report them, or (2) they are necessary for compensation and insurance claims.

But why?

There are many reasons why people pay so much attention to outcomes; there are also serious professional arguments for why this is not a practice worth continuing.

As this is a very fundamental point I will not dwell on it extensively; however, it is important to keep in mind. We need to step back and consider that our numbers might look good because the organisation has had an enormous amount of “luck”, or because incidents are not being registered or reported. By looking only at outcomes, we are getting, at best, an incomplete notion of safety. Or rather, of the absence of safety.

There is also the effect of randomness to consider. Assume, for this part of the discussion, that all accidents are reported and registered. We may believe their importance is determined by the severity of their outcomes. I would challenge that viewpoint as incorrect.

Let me expand on this with an example.

For many years I have worked in the railroad industry, where I learned that level crossings are among the highest-risk places on the network for accidents. When a car is hit by a train on a level crossing, chances are that the consequences are very bad for the people inside the car. The number of fatalities is usually not related to the causes of the accident, but depends on random factors, such as the position of the car on the crossing at the moment of impact. Is it hit full in the side, most likely killing passengers on impact, or does the train just clip the tail end, spinning the car around without injuring people in a major way? Then there is the number of people in the car: was it just one person driving to work, a parent with children in a minivan, or a full school bus?

The events (incidents, accidents) are basically the same, but the outcomes are wildly different, due to minor, often sheer random, factors. So we can safely conclude that from a prevention, learning and improvement point of view, the consequences are the least interesting part. As David Woods and co-authors say in their book “Behind Human Error”, outcomes are only loosely connected to processes. It is therefore better to focus on processes and events than on outcomes.
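To make the point about randomness concrete, here is a minimal, purely illustrative simulation. None of the numbers are real railway data; the impact probability and car occupancy figures are invented for the example. The same number of collisions happens in every simulated “year”, and only the random factors (where the train strikes the car, and how many people happen to be inside) vary, yet the fatality counts differ widely.

```python
import random

def simulate_year(n_collisions=20, seed=None):
    """One simulated year with a fixed number of level-crossing collisions.

    The event rate is held constant; only random factors vary:
    the point of impact and the number of occupants in the car.
    All probabilities and figures are invented for illustration.
    """
    rng = random.Random(seed)
    fatalities = 0
    for _ in range(n_collisions):
        occupants = rng.choice([1, 1, 1, 2, 2, 4, 5])  # mostly single drivers
        full_side_impact = rng.random() < 0.3          # assumed 30% of hits
        if full_side_impact:
            fatalities += occupants                    # worst case: all occupants killed
        # tail-end hits are assumed to cause injuries, not fatalities
    return fatalities

# Identical process, five "years", very different outcome counts.
for year in range(5):
    print(f"Year {year + 1}: 20 collisions, {simulate_year(seed=year)} fatalities")
```

The point is not the numbers themselves, but that a metric driven by fatalities would judge these identical years very differently.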

Even though there is hardly any information for prevention in the outcomes, they are not entirely meaningless. Outcomes provide a sense of urgency: it is easier to prioritise and allocate resources when a fatality has occurred.

A little journey through time

Are these new insights? By no means.

We can start with a flashback to the 1930s. That most criticised of all safety authors, Herbert W. Heinrich himself, pointed out many decades ago that it is the potential that lies in an event which is important. By reacting to the accident, regardless of the actual outcome, one can create improvement. Besides, there are usually more accidents than serious outcomes, so there are more opportunities to learn.

Attack the accidents, he said, and the consequences will take care of themselves. Heinrich made a clear distinction between events and their consequences. Think of his work what you will, his ideas on the subject were ground-breaking at the time and still stand today. Oddly, these particular thoughts of his are reflected only faintly in most safety metrics, even eight decades later.

Fast forward to the 1970s. Barry Turner, in his classic book “Man-Made Disasters”, looks at earlier disaster research and concludes that it is of little use for preventing future disasters because there has been so much focus on outcomes. The number of fatalities in a disaster is a function of population density, not of what caused the disaster.

It may be worthwhile to consider this factor in the light of exposure and fragility. Living on a volcano may bring certain benefits, but when it erupts there is a high risk of fatalities; the history of Pompeii is one example.

This knowledge has been around in safety for decades; however, it appears not to have sunk in, as people still focus heavily on outcomes.

A distorted view

Outcomes have another negative effect: they influence how we look at a case, a phenomenon known as “outcome bias”.

Chapter 13 of “Behind Human Error” is devoted to hindsight and outcome bias, and I would suggest reviewing it. David Woods states:

“Knowledge of outcome biases people’s judgement about the processes that led up to that outcome. We react, after the fact, as if knowledge of outcome was available to operators as well, and wonder why they didn’t see it coming. This oversimplifies and trivializes the situation confronting the practitioners, and masks the processes affecting practitioner behaviour before the fact. These processes of social and psychological attribution are an important obstacle to getting to the second story of systematic factors which predictably shape human performance.”

When outcomes are really horrible, we believe the causes must have been equally bad, perhaps even intentional or reckless.

The truth often is that the people involved did not intend to produce an unsafe outcome. Had they known that their process would lead to a fatal outcome, they would have used that information to modify their handling of the problem.

David Woods et al. mention a couple of strategies for dealing with outcome bias. It is important to realise that information about the outcome is irrelevant to judging the quality of the process that led to that outcome. Good decisions can lead to bad outcomes, and good outcomes may still occur despite poor decisions.

Trying to ignore knowledge of the outcome, or alerting people to the biases, are not very successful strategies for neutralising hindsight and outcome bias. More successful is to have people consider alternatives to the actual outcome, or to ask them to list reasons both for and against each of the possible outcomes, often referred to as the “Devil’s Advocate” approach.

There are many reasons to move away from a focus on outcomes. That doesn’t mean we shouldn’t react to accidents. Of course we have to learn from them, but we don’t have to wait for someone to get hurt. In the example mentioned before, when a car swerves around the level crossing barriers to make it across before the train comes, is it really just the impatient driver, or are there things in the design that should be improved?

We can use a near-miss event to learn; we do not have to wait for the next fatality.

More fundamentally, we should stop seeing safety as an outcome (i.e. a number of fatalities or injuries). Instead, we should try to adopt more fruitful ways of looking at safety.

In the next part of this mini-series, we will have a critical look at the phenomenon of SIF (Serious Injuries & Fatalities).

Carsten Busch is a self-declared Safety Mythologist and author of the well-received book Safety Myth 101, which collects 123 (and then some) safety myths. Crisp and compact discussions address weaknesses of conventional safety ‘wisdom’ and give suggestions for alternative approaches and improvement. An entire chapter of the book is dedicated to measuring safety and indicators; another deals with learning from incidents.

http://www.mindtherisk.com/the-book

10 thoughts on “Measuring Safety, part 1 – The Relevance of Outcomes”

  1. Carsten – very thoughtful article, and I like Ron’s comment.

    Not allowing ourselves to be swept along by populist flow, and going upstream from results to process provide opportunities to improve. Your example of Heinrich demonstrates why we should go back to source (read Heinrich) and think for ourselves.

    Apart from getting the facts straight, one of the things we should think about is why Heinrich’s ideas have endured for so long if, as some suggest, he was so wrong! One answer is recognition of the importance of learning from near misses.

    • Good to hear from you Nick.

      Indeed, we should regard both old and new sources. Old does not equal bad, and new is not always good, or even better. We have to assess things on their own merits, which is of course easier with old stuff because we have had more time to test it. Alas, some old things are not useful but have resisted the test of time nevertheless, for example because it’s good business or convenient (I’m thinking of the 88% thing…).

      Interestingly, I have been increasingly looking at older stuff recently. Turner, Allison, Simon… And Vaughan and Snook are also around 20 years old… Valuable stuff.

  2. Thanks for the article, Carsten. Your thoughts about outcomes remind me of an incident in which a worker slipped on an icy parking lot. How many times does this happen? This one time resulted in a knee injury and lost time. The client focused on the lost-time portion, forgetting entirely that the parking lot belonged to them.

  3. Carsten – in the last century Dr W Edwards Deming was showing how to transform management: ideas that many of us believe are still valid and useful today. Among other things Deming advocated charting data in a way that helps managers improve their organization. But, just as you warn concerning reliance on outcomes, he stressed that use of data requires knowledge about the different sources of uncertainty.

    Interestingly, some like to quote Deming as saying “if you can’t measure it, you can’t manage it”. Interesting because that is a short extract taken out of context. The full quote – available for those who take time to go back to source and read his book The New Economics – says the opposite: “It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth”.

  4. Pingback: Measuring Safety Part 2: Serious Injury Fatality - Safety Management

  5. Here I am reading this blog four years later! Intention can really be completely different from impact (look at the first comment). To me, your intention here was not to ask people to ignore outcomes (that would be swimming upstream, to me) but rather to learn from them concurrently with the other independent yet interactive systems that contribute to the outcomes. On to Parts 2 and 3!
