“If AI influences the decision, who owns the outcome?”
AI is accelerating how organisations build, test and deploy new ideas, compressing timelines that once stretched across years into a matter of weeks. That speed creates genuine opportunity. It also introduces new forms of exposure, particularly for businesses operating in high-risk, safety-critical or heavily regulated environments.
That tension is why we called this episode One technology, two truths. AI can unlock efficiency, creativity and competitive advantage. At the same time, it can blur accountability, outpace governance and create overconfidence in systems that are still probabilistic by nature. Both of those realities exist at once. Ignoring either is where problems begin.
In this episode of The Risk Factor, Arrash Nekonam, CTO at COMET, is joined by Jaye Deighton, Global Head of ICT & Innovation at Peterson Control Union, and tech entrepreneur Steve Shearman for a candid conversation about what AI adoption actually looks like inside established organisations.
This is not a hype-driven discussion about what AI might do someday. It is a grounded look at what is happening right now.
Some of the themes they explore include:
• The pressure on organisations to “do something” with AI
• Whether speed is quietly outpacing governance
• The tension between entrepreneurial experimentation and enterprise control
• Why messy data can quietly undermine confident outputs
• How accountability can become blurred when machines influence decisions
They also challenge some of the most common assumptions surrounding AI. It is not magic. It is not infallible. And it does not remove the need for human expertise. If anything, it demands more discipline, more judgement and clearer ownership than ever before.
Jaye speaks openly about the realities of deploying AI inside a large global organisation, where ambition from leadership must be balanced with data quality, policy and operational risk. Steve shares the perspective of an entrepreneur building rapidly in a fast-moving market, while acknowledging the dangers of building products that can quickly become obsolete.
During the “Root of the Matter” segment, the discussion sharpens. Arrash, Jaye and Steve examine what organisations risk getting wrong if they treat AI as a shortcut rather than a capability to be managed. They question whether some companies are mistaking experimentation for strategy, and whether confidence in AI outputs sometimes exceeds genuine understanding.
They return to a simple but uncomfortable idea: if AI shapes the recommendation, who ultimately owns the consequence?
If you are shaping strategy, approving investment, or deploying AI in operational settings, this discussion may challenge how confident you feel about the pace and direction of your own adoption.