Yes, You Should Argue with Success: Why You’re Luckier Than You Think You Are

Many organizations operate according to the maxim “you can’t argue with success.” In other words, if it works, it’s worth doing. You might have heard this in another guise when someone says, “I don’t care how you do it, just get it done.” 

Graphic: a frontline worker operating machinery while another conducts a safety inspection

But what if you should argue with success? What if what you think of as success is sheer luck, and the small failures that take place unnoticed every day simply haven’t yet aggregated into a cataclysm that results in injury, financial damage, or loss of life? 

Humans are not particularly good at extrapolating possible negative future outcomes from what they learn from past events. We are prone to assuming that if something went well the last time we performed a task, it will always go well. As a consequence, we develop an inflated sense of our ability to handle similar situations, and the level of risk we are comfortable with rises commensurately. This is what psychologists call “outcome bias”: judging a decision by its result rather than by the quality of the decision itself, and ignoring the contextual factors that happened to produce that result. In reality, many positive outcomes are simply sheer luck holding out against accumulating problems that escape attention.  

For example, consider a worker who uses a piece of machinery on a regular basis. The machine hasn’t had a scheduled maintenance check in more than two years, but it has shown no sign of problems. Each day the worker uses the machine, outcome bias kicks in: the machine won’t fail today because it didn’t fail yesterday.  

Because regular maintenance has been neglected, the worker is unaware of the accumulating deviations that are gradually building towards failure. Further, the worker fails to acknowledge the many variables outside the mechanical system that could be contributing to this run of luck, such as the different levels of stress the machinery experiences on different tasks, environmental factors such as temperature and humidity, or the amount of time the machine spends in operation. When these variables stay below the threshold for disaster, the outcome is fine; when even one of them crosses that threshold, disaster can strike. In other words, outcome bias blinds us to the reality that everything seems fine until it isn’t. 
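To make the threshold idea concrete, here is a minimal, purely illustrative Python sketch. The wear model, the stress distribution, and the failure threshold are all numbers invented for the example, not measurements from any real machine; the point is only that every uneventful day looks identical to the worker while the margin to failure quietly shrinks.

```python
import random

# Illustrative sketch (hypothetical numbers): each day of use adds a small,
# unnoticed amount of wear, and external stress (load, temperature, humidity)
# varies from day to day. Nothing visible happens until accumulated wear plus
# today's stress crosses the failure threshold.

random.seed(7)

FAILURE_THRESHOLD = 100.0   # assumed margin before the machine fails
wear = 0.0                  # accumulated, unnoticed deviation

for day in range(1, 1001):
    wear += random.uniform(0.05, 0.15)          # slow, invisible degradation
    stress = random.gauss(mu=60.0, sigma=10.0)  # variable operating stress

    if wear + stress >= FAILURE_THRESHOLD:
        print(f"Day {day}: failure (wear={wear:.1f}, stress={stress:.1f})")
        break
    # Every uneventful day looks like proof that the machine is fine
    # ("it worked yesterday"), even though the margin has quietly shrunk.
else:
    print("No failure in 1,000 days -- which still says little about tomorrow.")
```

Run it with different seeds and the failure day jumps around considerably; that variability is the sheer luck the worker is unknowingly relying on.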

In situations like this, every moment that machine is in operation is a near-miss. We’ve become accustomed to thinking of near-misses as obvious events in which failures occur but damage or injury is minimal: an explosion that doesn’t hurt anyone; a fire that is quickly doused; a piece of out-of-control machinery that is eventually tamed in a manufacturing environment. In reality, most near-misses are the tiny failures that take place unnoticed throughout a complex system and build up over time until they become disasters. 

In other situations, we might notice anomalies in the system but ignore them because they have no apparent impact on the outcome. As long as the outcome continues to be positive, the anomalies are tolerated. This complacency is known as “normalization of deviance.” The danger is that every run of an anomaly-filled process is essentially a different event, with multiple unknown variables keeping disaster at bay. In other words, every positive outcome is the result of luck that may have no bearing on whether future events turn out the same way. 

The two space shuttle disasters are textbook examples of outcome bias reinforced by normalization of deviance. In the Challenger disaster, the O-ring damage that ultimately caused the catastrophic breach had been a known issue for some time, but it was never treated as a constraint on launching the shuttle because it had never produced a serious failure. In the Columbia accident, foam had fallen from the external fuel tank on several previous launches, but each instance had caused no serious damage, so no one gave this seemingly minor anomaly much thought. Each foam strike was therefore a near miss that no one noticed, until one finally damaged Columbia’s wing, resulting in the loss of the spacecraft and the deaths of all seven crew members. 

The Deepwater Horizon disaster provides another example of minor errors aggregating throughout a complex system over time. While the rig’s blowout preventer (BOP) has shouldered the blame in popular culture and in high-level investigative reporting, the truth is that BP, Halliburton, and Transocean normalized countless deviations and reinforced their outcome bias by prioritizing profit and cost-cutting over process integrity and a culture of safety.

What can you do to counter outcome bias and avoid disaster? Consider the following (Gino et al., 2009): 

  • Remember that outcomes are events that result from complex interactions between multiple agents in a specific context. To understand the outcome, even a successful one, you must understand these complex interactions. Positive outcomes are not the result of individual decisions or actions. 
  • Data that can be collected, analyzed, and acted upon in real time is the key to turning lagging indicators into leading indicators and preventing accidents before they occur. IIoT (Industrial Internet of Things) devices can help surface the deviations that pass unnoticed and feed outcome bias (see the sketch after this list). 
  • Recognize that people rely on outcome bias more heavily in high-pressure situations. Under pressure, people lean on heuristics and biases that ease the cognitive burden of information overload, which leads to poor decision-making. 
  • Use risk-based thinking to understand and anticipate uncertainties and variabilities in your processes. 
  • Constrain outcome bias by preparing for worst-case scenarios and understanding how they might come about. The “pre-mortem” strategy, in which you imagine that the worst-case outcome has occurred and then work backwards to determine how it happened, can be a valuable tool here. 
  • Force decision-makers to justify high-risk decisions to ensure that everyone has a thorough understanding of the complexity of the context. 
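As a rough illustration of the leading-indicator point above, the following Python sketch watches a stream of hypothetical sensor readings. The alarm limit, baseline size, and drift threshold are assumptions chosen for the example, not values from any real IIoT platform; the idea is simply that a reading can be flagged the moment it stops behaving like its own baseline, long before it reaches the point where a traditional alarm would fire.

```python
from statistics import mean, stdev

# Illustrative sketch of a leading indicator (hypothetical values throughout):
# rather than waiting for a reading to hit the hard alarm limit (a lagging
# indicator), learn a baseline from early, known-good readings and flag the
# first reading that drifts well outside it, so the deviation is investigated
# before it can aggregate into a failure.

ALARM_LIMIT = 90.0   # assumed hard trip point, e.g. a bearing temperature in C
BASELINE_N = 50      # readings used to characterize normal behavior
DRIFT_SIGMAS = 3.0   # distance from the baseline that counts as a deviation

def first_deviation(readings):
    """Return (index, value) of the first reading that alarms or drifts."""
    baseline = readings[:BASELINE_N]
    mu, sigma = mean(baseline), stdev(baseline)
    for i, value in enumerate(readings[BASELINE_N:], start=BASELINE_N):
        if value >= ALARM_LIMIT or abs(value - mu) > DRIFT_SIGMAS * sigma:
            return i, value
    return None

# Hypothetical stream: stable around 70 C, then a slow upward creep that never
# reaches the 90 C alarm within these 400 readings.
stream = [70 + 0.3 * ((-1) ** i) + (0.05 * (i - 100) if i > 100 else 0)
          for i in range(400)]
print(first_deviation(stream))   # flags the creep long before any alarm
```

In this sample stream the creep is flagged within roughly a dozen readings of starting, while the hard alarm limit would never have been reached in the window shown; that gap between detection and alarm is where intervention is still cheap.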

Outcome bias might be a natural human proclivity, but it doesn’t have to shape how we think. Data, risk-based thinking, leading indicators, and systems thinking can help you understand the places where deviation and variability are aggregating so you can prevent disaster.  

Gino, F., Moore, D. A., & Bazerman, M. H. (2009). No harm, no foul: The outcome bias in ethical judgments. Harvard Business School NOM Working Paper, (08-080). 

Robson, D. (2019). The bias that can cause catastrophe. BBC Worklife. Retrieved December 17, 2019, from https://www.bbc.com/worklife/article/20191001-the-bias-behind-the-worlds-greatest-catastrophes 

Savani, K., & King, D. (2015). Perceiving outcomes as determined by external forces: The role of event construal in attenuating the outcome bias. Organizational Behavior and Human Decision Processes, 130, 136-146. 

Tinsley, C. H., Dillon, R. L., & Madsen, P. M. (2011). How to avoid catastrophe. Harvard Business Review, 89(4), 90-97. 

Walmsley, S., & Gilbey, A. (2016). Cognitive biases in visual pilots’ weather-related decision making. Applied Cognitive Psychology, 30(4), 532-543. 
