Belief vs Reality.

When you examine the official inquiries into major disasters, a disturbingly consistent pattern emerges. Management believed they had robust systems. Reports suggested that risks were under control. Frontline concerns were softened or filtered as information travelled upwards. Fragmented systems and siloed responsibilities hid the real exposure. The result was the same in case after case: a catastrophic gap between belief and reality.

Across sectors, inquiries into major accidents tell a similar story about leadership blind spots. Senior leaders are routinely presented with assurances that systems are robust, dashboards that show predominantly green indicators, audit reports couched in procedural language, and risk matrices that appear to show controlled exposure. Meanwhile, frontline conditions are far more fragile than anyone is prepared to admit. Line managers, supervisors and middle management often “manage information upwards”, whether consciously or unconsciously, to protect local reputations and avoid conflict. Over time, the organisation becomes very good at telling itself that it is safe.

[Image: the Titanic's bell]

The sinking of the Titanic is an early and well-known example. Both the British and United States inquiries found that the ship complied with many of the standards of the time. Lifeboat numbers met outdated rules, and the ship was widely promoted as “unsinkable”. The captain and officers pressed on at speed through an ice field despite multiple ice warnings, a decision underpinned by overconfidence in the ship’s design and a belief that they were adequately protected. On paper, the safety arrangements were acceptable. In practice, they were fatally inadequate.

The Space Shuttle Challenger disaster provides another clear example. The Rogers Commission described the accident as “an accident rooted in history”. NASA and its main contractor had known about the vulnerability of the booster O-rings for years, and there had been repeated anomalies on earlier flights. Instead of triggering a redesign, the organisation slowly normalised these deviations.

On the day of launch, engineers raised serious concerns about low temperatures and seal performance. Management overruled them, influenced by schedule pressure, political expectations and a history of “getting away with it”. Leaders believed that the process and technical margins would protect them. Their belief was not based on sound evidence; it was shaped by confirmation bias and an organisational culture that struggled to hear bad news.

[Image: the Challenger disaster]
[Image: an offshore oil rig]

Deepwater Horizon followed a similar pattern in a different industry. Investigations in the United States identified a string of management failures, poor risk assessment, misinterpretation of key test results and weak decision-making. There was no single technical root cause. Rather, multiple small decisions, inadequate controls and overconfidence in the robustness of the system combined to create catastrophe. Management relied heavily on paperwork and reports that presented a controlled picture, while missed warning signs and flawed tests were either misunderstood or minimised.

At Fukushima, investigations found that the plant operator and regulators had known of the potential tsunami risk for years. Internal analyses pointed to the possibility of far higher waves than the plant was originally designed to withstand, yet these findings did not drive timely action. Reports were softened and there was hesitation to invest in major upgrades. Formal compliance with older standards created a false sense of security. When the tsunami arrived, the gap between belief and reality became brutally clear.

[Image: Fukushima Daiichi Unit 3 after the explosion]
[Image: the Grenfell Tower fire (photo: Natalie Oxford)]

The Grenfell Tower fire in London again revealed systemic failure rather than a single “bad actor”. Over time, combustible cladding and insulation were installed, fire doors and other protections deteriorated, and residents’ concerns were not taken seriously. The landlord and regulators had no integrated view of how all these changes and omissions had eroded the tower’s fire defences. Official paperwork and compliance labels existed. The actual risk was far higher than anyone at senior levels was prepared to acknowledge.

The loss of the Titan submersible is a recent and stark example of extreme overconfidence at the edge of technology. The operator consciously bypassed normal certification pathways, dismissed repeated warnings from experts about design and materials, and used past successful dives as “proof” that the vessel was safe. It cultivated a culture of innovation that treated conventional safety thinking as an obstacle. When the craft failed, it did so without warning, leaving no room for recovery.

[Image: wreckage of the Titan submersible]

It would be convenient to dismiss these events as rare, high-profile anomalies that only affect specialised or extreme environments. That would be a serious mistake. The same underlying factors contribute to Serious Injuries or Fatalities in enterprises of any size. According to Safe Work Australia, there were 188 workplace fatalities in 2024 and over 100,000 serious workers’ compensation claims. The fatality rate has not meaningfully improved in recent years. Many of these serious incidents occurred in small and medium enterprises or in workplaces that would not normally be classified as “high risk”.

Any death in the workplace is one too many. Behind each statistic, the pattern is familiar. Critical tasks were assumed to be complete but were not verified. Early warning signs were noted locally but not escalated. Fragmented systems hid cross-cutting exposure. Leaders were reassured by polished reporting that did not show the full picture.

The real lesson for leaders is not that more paperwork or another system will save them. The lesson is that belief is not enough. If you sit on a Board, lead a business unit, manage a site or regulate a high-risk sector, you cannot afford to rely purely on trust, narrative and appearances. The question you need to take back into your organisation is simple:

Where are we relying on belief, and where do we have live, objective evidence that our critical controls are actually working?

In the next blog in this series, we examine why “adding another system” usually makes the problem worse, and how a connected, evidence-based view of control can shift your organisation from “we think” to “we know”.

Cheers

Stu
