Every safety or operations leader has faced a familiar fear: the kind triggered when an incident recurs, even after a “thorough” investigation. Everything seemed resolved, the findings felt accurate, and the actions were signed off. Yet somehow, the same problem returned. Could it be bad luck? Or could it be the ghost of an incomplete investigation coming back to haunt you?
Even experienced teams can fall into traps that distort findings and bury the truth. These traps aren’t born of carelessness; they stem from human psychology and process shortcuts that quietly shape the outcome.
At COMET, we often recognise this pattern: well-intentioned investigations that fail to ask the right questions, analyse the right evidence, or challenge the right assumptions.
This Halloween, it’s worth taking a closer look at the things that truly haunt investigations and how to stop them from coming back.
Cognitive and human factor traps
Even the best investigation frameworks can be undone by human bias. Below are some of the most common traps we see across teams and why they matter.
- Confirmation bias: Teams latch onto a theory early, then gather only the evidence that fits. It feels efficient but shuts out deeper understanding.
- Proximal bias (“person in the seat” effect): Attention zooms in on the person who made the mistake, while the broader conditions that shaped that action go unchecked.
- Normative language and hindsight judgments: Saying “they should have done X” ignores the real context at the time. That makes the report feel like a verdict, not an investigation.
- Mechanistic reasoning: Treating incidents as simple “component broken” or “human error” misses the tangled system interactions that enabled that failure.
Process and procedural pitfalls
A strong process is your safety net. When it weakens, accuracy and trust follow. These are the procedural pitfalls that most often derail investigations.
- Undefined scope: Without clear boundaries (what we’re investigating, when, why) the inquiry becomes unfocused and ineffective.
- Evidence contamination: Handling physical or digital data without secure chain-of-custody weakens reliability. Once compromised, the investigation is compromised.
- Tainted witness testimony: When witnesses share information before interviews, individual views become muddied, reducing the value of each account.
- Premature closure: Time pressure or leadership expectations rush conclusions, leaving pockets of uncertainty unexamined.
- Ignoring systemic factors: Blaming an individual without digging into system design, organisational processes or training issues guarantees repeat failure.
How to break the cycle of repeat failure
- Define your investigation’s scope up front: what happened, when, who, and what you will examine.
- Protect evidence from contamination: physical items, digital logs, witness statements all deserve secure handling.
- Challenge your assumptions: look for data that disproves your initial theory, not just supports it.
- Use a substitution test and ask: “Would someone with similar skills, in the same conditions, have made that decision?” If yes, it’s probably not just individual fault.
- Focus on system design: look beyond the individual to how policies, training, tools and environment combined to create the incident.
- Give the investigation time. Rushing to finish may feel efficient but creates the perfect setup for the incident to return.
The COMET way
At COMET we’ve built our investigation and root cause analysis toolset around two core objectives: 1) keeping investigations structured and unbiased, and 2) turning incident data into prevention insight. 
Here’s how we deliver on those objectives: 
- Coded taxonomy: Every investigation uses the same coded classification system, meaning root-cause data is captured consistently across incidents, teams and sites. This enables trend identification that narrative-only approaches miss.
- Bias-resistant workflow: The platform guides investigators through a structured framework designed to expose and reduce common errors such as confirmation bias, mechanistic reasoning and premature conclusions.
- Rich analytics & system-level insight: Because data is codified and centrally collected, organisations can move beyond single-incident reviews to access cumulative data analytics that enable them to recognise system failures, weak links, common failure modes and hidden patterns.
- End-to-end workflow: From incident capture through investigation, root-cause classification and action tracking, the entire process is visible and auditable, making it easier to demonstrate compliance readiness and shift from reacting to preventing. We do this through our Incident Management solution or by integrating with HSE platforms such as Intelex, Enablon, Cority and more.
- Human-factors focus: Rather than defaulting to “operator error” or “equipment failure,” the system builds human-factors and systemic context into investigations, so you understand how actions, environment, design, training and policies combined to produce the incident.
- AI-assisted support (COMET Companion): COMET’s built-in AI assistant provides real-time investigation guidance, configuration help, and multilingual support. It’s always on, always learning, and included for every user at no extra cost.
With COMET you gain access to tools and a structured process that enable you to understand what happened and why, what systemic changes are required, and how to stop it happening again. That’s how you move from incident management to a prevention mindset.
And since it’s Halloween, remember: hidden causes left unexplored don’t vanish. They lie dormant, ready to creep back when the conditions align. With COMET, you can lock the door on those investigation ghosts for good.
Ready to learn more?