From Assumptions to Assurance

On paper, most organisations look safe and well controlled. Policies exist, procedures are documented, training is recorded, dashboards are presented to the Board and the leadership team, and certifications are proudly displayed. To any reasonable observer, the organisation appears to be operating with maturity and discipline.

History tells us a very different story.

Across major industrial accidents, investigations repeatedly conclude that organisational and management factors are the dominant contributors, rather than simple front-line mistakes or freak technical faults. Time and again, major disasters have occurred in organisations that believed they were protected. Even a cursory review reveals a long list of high-profile incidents at major organisations that all had documented systems, procedures and formal structures in place.

When you read the official reports into these events, a consistent pattern emerges. Senior leaders were told that their systems were robust and that risks were under control. Formal procedures and technical systems existed. KPIs and dashboards were regularly reported. Yet the real picture was very different from the one the Board and executives thought they were seeing.

For business leaders, directors, regulators and line managers, this raises a blunt question that cannot be avoided: How do you know you are protected today, not just in theory, but in practice?

Work commissioned by Safe Work Australia has highlighted what it described as the “monotonous commonality” of organisational factors in major accidents. These include cost-cutting and resource constraints, pressure arising from deadlines and production demands, weak change management, and a misplaced focus on paper compliance rather than real risk control.

Broader research on human and organisational factors in major accidents reinforces this picture. Critical failures almost always involve latent organisational weaknesses that line up with technical and human errors. There is rarely a single root cause. Instead there is a chain of seemingly minor gaps that eventually align in the worst possible way.

Confirmation bias plays a central role in this. In risk management, confirmation bias leads leaders to favour data that supports the “we are safe” narrative and to discount contradictory evidence. When things appear to be going well, it is psychologically easier to accept reassuring information and far more difficult to engage with bad news. Studies of decision-making under pressure show that urgency amplifies this bias. When deadlines loom or stakeholders are demanding progress, decision makers tend to lean on pre-existing assumptions rather than update their view on the basis of fresh data. People stop asking whether a control is still effective and instead assume that, because nothing has gone wrong yet, it is probably still fine.

In practice, what kills people is not usually the absence of systems. It is the untested belief that those systems are working.

This brings us to the way many organisations use risk matrices. It is useful to refer to the work of Professor Andrew Hopkins in his paper “A Note on What’s Wrong with Risk Matrices” (July 2025), which challenges the efficacy of this extremely common risk management tool and, by extension, the reliance that senior leaders and Boards place on it to understand enterprise risk. There are at least two major issues. First, risk matrices tend to oversimplify complex risk profiles and can distort priorities by forcing them into arbitrary categories. Second, and more fundamentally, the information used to populate risk matrices is often out of date, incomplete or inferred from other sources rather than properly verified. A colour-coded risk matrix can therefore provide a false sense of control if it rests on stale or inaccurate data.
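To make the first problem concrete, here is a minimal sketch of how matrix banding can flatten very different risks into one cell. The thresholds, hazard names and dollar figures are all invented for illustration; nothing here is drawn from Hopkins' paper.

```python
# Illustrative sketch only: a toy 5x5 risk matrix showing how banding
# can collapse very different risks into the same cell. Thresholds and
# figures are hypothetical.

def band(value, thresholds):
    """Map a continuous estimate onto a 1-5 ordinal band."""
    for i, limit in enumerate(thresholds, start=1):
        if value <= limit:
            return i
    return len(thresholds) + 1

LIKELIHOOD_BANDS = [0.01, 0.05, 0.20, 0.50]   # annual probability cut-offs
SEVERITY_BANDS = [1e4, 1e5, 1e6, 1e7]         # consequence cost cut-offs ($)

risks = {
    # name: (annual probability, consequence in $)
    "Hazard A": (0.06, 150_000),    # expected loss ~ $9,000 / year
    "Hazard B": (0.19, 950_000),    # expected loss ~ $180,500 / year
}

for name, (p, cost) in risks.items():
    cell = (band(p, LIKELIHOOD_BANDS), band(cost, SEVERITY_BANDS))
    print(f"{name}: matrix cell {cell}, expected loss ${p * cost:,.0f}/yr")
```

Both hazards land in cell (3, 3) and receive the same colour, even though their expected losses differ by a factor of roughly twenty. That is exactly the kind of distortion arbitrary categories can produce.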

Most organisations do not lack systems or documents. They lack proof. Safety procedures, risk frameworks, incident reporting tools, maintenance systems, audit plans and training programs often operate side by side. At first glance this creates an impression of maturity. Leadership can point to binders of policies, certified management systems, well-presented dashboards and multiple specialised software platforms. The real problem lies in the gap between what everyone assumes is happening and what is actually happening at the front line.

The quiet, routine tasks that keep people safe often sit outside formal systems. They live in spreadsheets, email trails, personal calendars, local checklists and whiteboards. They are easy to postpone and hard to see. When they are not completed, there is rarely a loud alarm. There is simply a small hole in the wall of protection that no one notices, at least not until something goes badly wrong.

If you strip everything back, a single question remains: Can you see, in a clear and defensible way, whether all the things that keep people alive and protect the organisation are being done, on time, by the right people, and to the right standard?
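As a thought experiment, the sketch below shows what an evidence-based answer to that question might look like: a simple check over critical-control records that flags anything overdue, completed late, or never independently verified. Every record, field name and rule here is hypothetical.

```python
# Hypothetical sketch: turning "are the things that keep people alive
# being done?" into a check over records rather than a feeling.
# All records, field names and rules are invented for illustration.
from datetime import date

controls = [
    {"task": "Pressure relief valve test", "due": date(2025, 6, 1),
     "done": date(2025, 5, 28), "verified": True},
    {"task": "Emergency shutdown drill", "due": date(2025, 6, 15),
     "done": None, "verified": False},
    {"task": "Gas detector calibration", "due": date(2025, 5, 20),
     "done": date(2025, 6, 2), "verified": False},
]

today = date(2025, 6, 20)

for c in controls:
    gaps = []
    if c["done"] is None and c["due"] < today:
        gaps.append("overdue and not completed")
    elif c["done"] is not None and c["done"] > c["due"]:
        gaps.append("completed late")
    if not c["verified"]:
        gaps.append("no independent verification")
    print(f'{c["task"]}: {"; ".join(gaps) if gaps else "evidenced on time"}')
```

The point is not the code. A real assurance view would also need to cover who performed each task and to what standard; this sketch covers only timeliness and verification. The point is that the question only becomes answerable once the work is recorded and checkable in some form, rather than held in someone's head.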

If the honest answer involves guesswork, instinct and manual data-chasing, then what you have is belief, not evidence. In the next blog in this series, we look at how these gaps play out in real-world disasters and what they reveal about leadership blind spots.

Cheers

Stu
