What is safety?

Is it an absence of failures – usually recognised by injury, damage or ill-health? Or is it a state of continued success which ensures everyone goes home in one piece? These might sound like two sides of the same coin, yet there’s a subtle distinction.

The first represents a lack of negatives, whereas the second is the presence of positives. And that’s significant, because when it comes to frame of mind, your choice of interpretation will affect the kind of attitude you encourage in your workplace.

Suppose you go all month without a single accident. Does that mean you're safe? It might. But it might also mean you've been lucky.

The level at which you set your measure also determines your safety performance. If you only count fatalities or really serious injuries, then most organisations will look good month after month. If you add less serious injuries into the measure, some of us might see our performance decline.

What if you were to add in all the incidents that had a genuine potential for someone to be harmed?

By the sheer ratio of numbers, most people's performance figures would soon start to look pretty disappointing. Performance appears highly variable even though nothing has actually changed except what we count. This sustains the rather miserable, negative attitudes that safety has traditionally had attached to it.

Measuring performance in this way becomes particularly difficult when you get better, and the numbers get smaller and smaller. Eventually you won’t have anything left to measure. You could be forgiven for thinking ‘We didn’t have any serious injuries last year and none again this year. Great!’

But have you got better or just stood still?

Measures are open to abuse too

Imagine this: someone trips over some rough ground and lightly sprains their ankle (already sore from the weekend's football…). They take time off work to let it rest and get better. Is this really worth flagging up to board level? Is there really any great learning from this low-level incident?

Some organisations might be tempted to gently massage the figures too.

How many RIDDOR-reportable injuries have been avoided by getting the injured worker back into the workplace on 'light duties'? Yes, it's better to have them in the workplace doing something – but the fact that they're not able to do their normal job means the injury is still reportable to the enforcing authority (albeit, now that the time frame has lengthened, this is less of an issue).

In one organisation I worked for, I saw a great improvement in our lost day figures one year – not because we got so much better at keeping people safe, but because our managers got better at getting people back into work sooner!

None of this should be about the numbers

At JOMC, we don’t encourage people to recognise, report, record and track incident data just so we can compare numbers. We do it so that we can learn valuable lessons in how to keep everyone else free from injury in the future.

People need to recognise that there are equally good lessons to be learnt from all the times that a job or task is completed successfully. Like, why was it successful? How did we accomplish it on time without any problems? What did we do to overcome any potential problems along the way? How can we make sure we are equally successful next time?

Success is a great deal more common than failure, if you think about it, so there are many more opportunities to learn – plus nobody was hurt!

So, it’s time to start using more positive, leading indicators like behavioural compliance, culture benchmarking or engagement rates. We’ll still have to report some lagging indicators – but let’s at least make them useful, and count all the high-potential incidents (whatever the actual outcome).