Can safety performance be benchmarked? And if so, how?
That question comes up often in our conversations with clients and industry contacts across high-risk sectors. Whether it’s ports, terminals, energy, construction, rail, or utilities, companies are wrestling with it right now.
Organisations know they need to show commitment to safety. They know their data holds value. And many are starting to realise that how they measure performance might say just as much as what they measure.
But there's a difference between collecting data and understanding it. And that’s where benchmarking starts to get interesting.
We’re sitting on more data than ever, but what are we doing with it?
Most organisations already track lagging indicators. Many are now also collecting leading indicators. Near misses. Safety observations. Audit scores. Behavioural data. Some are even experimenting with AI. On the surface, it looks impressive. But beneath that, many still can’t answer a simple question:
How safe are we, really, and how do we know?
We hear the same pattern from multiple organisations, across different sectors. Large volumes of data, but no consistent way to translate it into clear insight. No agreed structure for investigations. No shared definitions for what a root cause actually means. And no visibility on whether lessons learned in one terminal are being applied in another.
That’s not a technology problem. That’s a systems problem. And until that’s addressed, benchmarking will remain guesswork.
The fear of missing out is real
Some companies are already getting ahead. They are digging into years of incident data and identifying patterns others can’t see. They are spotting repeat causes that never made it onto dashboards. They are tracking the quality of investigations, not just the number. And they are using that information to reduce repeat issues, not just count them.
These are the same companies setting internal benchmarks across regions and business units. They are not waiting for a perfect external comparison. They are starting with their own data and asking better questions of it.
And that is where the shift happens. Because benchmarking should not start with "am I different?". It should start with "why am I different?".
Not all comparisons are created equal
Benchmarking is only meaningful when the data is consistent, the language is shared, and the behaviour behind the numbers is understood.
If one team investigates thoroughly and another just ticks the box, their incident numbers may look the same, but what they tell you couldn’t be more different. If one site encourages open reporting and the other suppresses it, who looks safer on paper?
The point is simple: data without context leads to false confidence. And false confidence in safety is dangerous.
The role of leadership in benchmarking
The drive to benchmark well has to start at the top. Not just as a reporting requirement, but as a mindset.
When senior leaders value honest reporting, seek out intelligence, and act on it visibly, it changes behaviour across the organisation.
You get better data. More insight. More learning.
That culture does not come from a dashboard. It comes from leadership that wants to know the truth and is prepared to act on it.
If you want to benchmark, fix your foundation first
Before you benchmark anything, fix the basics:
- Are your investigations structured and repeatable?
- Do your teams speak a common root cause language?
- Can you trust the data that feeds your indicators?
- Are lessons being captured or just closed out?
Until those foundations are in place, comparing data across teams or with peers is premature. Start with internal benchmarking. Compare sites. Compare thinking. Compare the depth of learning, not just the number of forms filled in.
The real value isn’t in comparison, it’s in clarity
As Chris Ingham, EHS Director at Peel Ports Group, said during a panel session at TOC Europe two weeks ago: “Safety is not a competition in our sector. Ports don’t win by having the lowest LTIs. They win when they learn faster, fix problems properly, and help their people go home safe, every time.”
That only happens when the data you collect leads to clear insight, which turns into better decisions.
What gets measured should matter
This is why COMET exists. We help high-risk industries improve the quality of their investigations, their learning, and their outcomes. Structured frameworks. Consistent language. Behavioural clarity.
And now, with COMET Signals, we’re helping organisations take the next step. Using AI to detect unseen risks across vast datasets. Think of it as an extra set of eyes, constantly scanning your incident and audit records to find patterns humans might miss. Not replacing people. Supporting them. Giving leaders the signals they need to act early, not late.
When your data is clean, structured, and analysed properly, benchmarking stops being a guessing game. It becomes a strategic tool.
Want to benchmark better? Start by improving the way your organisation learns.
See how COMET and COMET Signals help teams turn safety data into meaningful action, not just performance metrics.