The short answer is no. Or at least, not yet.

Artificial intelligence continues to evolve rapidly, delivering powerful solutions across industries. In many business functions, AI-powered automation can streamline routine tasks, analyse vast data sets, and improve efficiency. But when it comes to safety-critical decision-making, investigations, and root cause analysis, there are clear limits to how far AI can and should go.

At COMET, we believe AI can support human investigators but not replace them. The risks of over-automating safety and incident management decisions are too great, especially in high-risk industries where lives, operations, and reputations are at stake.

The promise and the danger of fully automated decision-making

In some sectors, new tools have emerged claiming to use AI-powered image recognition or pattern matching to generate root cause analysis results automatically. The idea of uploading a report, image, or dataset and receiving an instant RCA may sound appealing on the surface, but it introduces serious risks:

  • Context blindness: AI lacks full understanding of operational, environmental, or cultural context. Root cause analysis often involves complex human, procedural, and organisational factors that go far beyond data points.
  • Hallucination risks: Generative AI models can produce outputs based on flawed or incomplete logic, leading to oversimplified or entirely incorrect conclusions.
  • False confidence: AI-generated results may appear authoritative, making users more likely to accept flawed analysis without questioning the underlying reasoning.
  • Missing the systemic factors: High-risk incidents often have multiple contributing factors across systems, processes, human factors, and organisational culture. No algorithm can fully capture these nuances.

The consequences of over-reliance on AI in safety-critical situations are not hypothetical. In 2024, an AI-controlled "SmartTram" in St. Petersburg, Russia, experienced a brake failure during a test run. The system failed to react appropriately, causing the tram to plough into a crowd of pedestrians and resulting in multiple injuries. The incident raised serious concerns about allowing AI to operate without sufficient human oversight in public safety scenarios.

Why human-led investigation remains essential

True root cause analysis requires trained investigators who can:

  • Examine evidence with professional judgment
  • Consider wider organisational and cultural influences
  • Conduct interviews, evaluate conflicting accounts, and assess credibility
  • Apply investigative frameworks correctly to complex, real-world situations
  • Validate findings and recommendations through experience and cross-functional collaboration

In safety-critical environments, these human elements are irreplaceable. Software alone cannot replicate professional investigative thinking.

How COMET applies AI the right way

At COMET, we fully embrace the value of AI, but only where it enhances, not replaces, human expertise. Our AI-powered products are designed to support better human decision-making:

  • COMET Signals, our AI Data Analytics solution, helps organisations process and analyse large datasets to identify patterns, trends, and emerging risks across operations, investigations, and audits. It surfaces areas for further human review, not final conclusions.
  • COMET Companion uses AI to offer real-time, multilingual support and knowledge reinforcement. Investigators remain fully in control of the analysis process.
  • Our core investigation, RCA, and audit modules remain fully human-led, ensuring critical thinking, context evaluation, and expert judgment drive outcomes.

This deliberate approach reflects our core belief: AI should augment professional expertise, not automate safety-critical judgments.
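
To make that belief concrete, here is a minimal, purely illustrative sketch in Python of the human-in-the-loop pattern described above. The names, fields, and threshold are hypothetical, not COMET's actual implementation; the point is the division of labour it encodes.

```python
from dataclasses import dataclass


@dataclass
class Signal:
    """A hypothetical item surfaced by analytics for possible review."""
    description: str
    anomaly_score: float  # produced by a model; a stand-in for any AI output


REVIEW_THRESHOLD = 0.7  # hypothetical tuning value


def triage(signals: list[Signal]) -> list[Signal]:
    """The model's only job: flag candidates for a human review queue."""
    return [s for s in signals if s.anomaly_score >= REVIEW_THRESHOLD]


def record_finding(signal: Signal, investigator: str, finding: str) -> dict:
    """The final determination is always authored by a named person."""
    return {
        "signal": signal.description,
        "investigator": investigator,  # accountability stays with a human
        "finding": finding,            # never auto-populated by the model
    }
```

Note that nothing in the sketch allows the scoring step to write a conclusion: the AI narrows attention, and a person owns the finding.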

The ethical responsibility of balanced AI adoption

While incidents like the St. Petersburg tram accident highlight the dangers of misapplied AI, it is equally true that AI has already saved lives and continues to transform industries in positive ways. From predictive maintenance preventing equipment failures to AI-supported diagnostics aiding doctors in early detection of disease, the technology offers tremendous promise.

We know that AI will continue to evolve, and its role in supporting high-risk industries will grow. But even as its capabilities expand, certain decisions should always remain in human hands. Safety, investigation, and root cause analysis demand expertise, critical thinking, and accountability that only trained professionals can provide.

Human-led processes, supported by the right tools, ensure organisations maintain accountability, transparency, and defensible conclusions. Investigators remain responsible for understanding the full story, not simply accepting machine-generated outputs.

AI has enormous potential, but only when carefully applied alongside professional expertise. In safety, there are no shortcuts. At COMET, we build technology to serve people, not to replace them.

Learn more about how COMET combines intelligent software with structured human-led investigation, training, and process embedding here.