
How do you use near-miss reporting?

We all know the importance of focusing on the bottom of the safety triangle if you want to prevent injuries rather than simply react to them. But how do you use near-miss reporting in your organisation?

Do you want near-misses to increase because they’re an opportunity to learn from events that could’ve caused harm? Or do you want fewer because they’re an indication that things aren’t right?

Lagging indicators

Traditionally, safety performance was measured primarily in terms of the number of incidents and accidents reported, with various rates being reviewed such as the accident frequency rate (AFR) and the lost time injury rate (LTIR). This is historical information: it shows that an incident occurred and that corrective action is in place to try to prevent it happening again. However, it doesn’t tell us much about how to prevent things happening in the first place.

These are called ‘lagging’ or ‘trailing’ indicators: they report on what has already happened. The output of these measures is reactive and arrives after the event. They provide data about past performance, but they don’t help predict future performance or show us how to prevent the next incident from happening. If we focus only on these indicators we’re always playing catch-up.
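As a rough illustration of how lagging rates work, the sketch below calculates AFR- and LTIR-style figures by normalising a count of incidents to a fixed number of hours worked. The multipliers and figures here are assumptions for the example only; organisations and regulators use different bases (100,000, 200,000 or 1,000,000 hours are all common), so check which convention applies to you.

```python
def frequency_rate(incidents: int, hours_worked: float, per_hours: float) -> float:
    """Incidents normalised to a standard number of hours worked."""
    return incidents * per_hours / hours_worked

# Illustrative period: 3 lost-time injuries across 450,000 hours worked.
# The 100,000 and 200,000 multipliers are example conventions, not fixed standards.
afr = frequency_rate(incidents=3, hours_worked=450_000, per_hours=100_000)
ltir = frequency_rate(incidents=3, hours_worked=450_000, per_hours=200_000)

print(f"AFR:  {afr:.2f} per 100,000 hours worked")   # 0.67
print(f"LTIR: {ltir:.2f} per 200,000 hours worked")  # 1.33
```

Whatever the multiplier, the point stands: these numbers only move after someone has already been hurt or time has already been lost.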

Leading indicators

‘Leading’ or ‘upstream’ indicators are measures linked to actions that prevent future incidents, such as hazard reports. The outputs from these measures are proactive. They can be passive or active in nature, with effects that are short-term (dealing with unsafe conditions, observing behaviours) or longer-term (resourcing safety training).

[Image: ‘Proactive’ and ‘Reactive’ written on a chalkboard]

To improve on lagging indicators alone, safety teams push the focus towards near-misses: reporting events that occurred but didn’t cause an injury. A robust near-miss reporting system is often very successful at identifying potential safety deficiencies and poor practice, especially when paired with good feedback and engagement from everyone concerned in making the improvements.

The difficulty comes in sustaining this system as a proactive leading indicator once people feel there’s nothing left to sort out because all the obvious issues have been dealt with. It can be viewed negatively too: when reported numbers are tied to targets to push people into reporting, it becomes more of a tick-box exercise than a true learning opportunity.

When accidents and incidents have been drastically reduced, keeping people focused on the potential for accidents to happen becomes even more important, because it’s all too easy to become complacent and take your eye off the ball. Near-misses happen all the time, so they’re a good way to learn without anyone having to suffer. Use them as part of a safety discussion – try starting with driving near-misses and you’ll soon have people admitting to near-misses they’d never normally report.

Some safety teams, however, see near-misses in a different light. They argue that the only difference between an accident and a near-miss is the lack of injury. From this perspective, a near-miss is effectively the same as an incident except that no one suffered any ill effect from it – and, as we know, that is largely down to chance.

So for some organisations the near-miss is seen as a lagging indicator rather than a leading one, because if they’re improving their safety performance they should see fewer near-misses reported, not more. They treat unsafe-condition and hazard reporting, and observations of safe and unsafe behaviours, as the leading indicators they can encourage more of.

What’s your stance on the use of near-misses as indicators within your safety performance system? Do you want to see more of them or fewer if you’re getting better?
