Average Harm Severity

By assigning a score to each level of severity in our incident tracking, we can better understand the culture of safety in our organization. Primarily: are we reporting enough near misses relative to incidents that cause more significant harm? Taking an average score over time shows whether there is a general bias towards reporting exposure to unsafe conditions only when someone is actually harmed.

Setting the Values

Using the estimated incidence rates from research (Frank E. Bird's counts below) provides a good basis for assigning scores, as it fits the ratio we would like to see in reporting:

Severity Breakdown

Severity         Count   Score   Product
Death/Critical       1     100       100
Serious/Major       10      60       600
Moderate             -      30         -
Minor               30      10       300
Negligible           -       1         -
Near Miss          600       0         0
Total              641              1000

Average Harm Severity = 1000 / 641 ≈ 1.56
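
As a rough sketch of the calculation (the severity labels, score values, and helper name here are assumptions for illustration, not an established standard), the metric is just a weighted average of scores over all reports:

```python
# Severity scores as assigned in the table above (labels and values assumed).
SCORES = {
    "Death/Critical": 100,
    "Serious/Major": 60,
    "Moderate": 30,
    "Minor": 10,
    "Negligible": 1,
    "Near Miss": 0,
}

def average_harm_severity(counts):
    """Weighted average severity score across all reported incidents."""
    total_reports = sum(counts.values())
    total_score = sum(SCORES[severity] * n for severity, n in counts.items())
    return total_score / total_reports if total_reports else 0.0

# Frank E. Bird's incidence ratios (1 : 10 : 30 : 600) used as report counts.
bird_counts = {"Death/Critical": 1, "Serious/Major": 10, "Minor": 30, "Near Miss": 600}
print(f"{average_harm_severity(bird_counts):.2f}")  # 1.56
```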

Given that people are often too strict in their interpretation of “near miss”, opportunities for harm tend to be underreported. Ultimately, we could and should target an average of at most 1.

While the distributions may vary, even in situations where any incident is likely to cause significant harm, there should still be a high ratio of near misses reported.

Examples

Severity Counts     Focus on Severe   Mixed Reporting   High Reporting
Death/Critical                    1                 1                1
Serious/Major                    10                10                4
Moderate                         10                10               16
Minor                             5                10               64
Negligible                        5                10              256
Near Miss                         2                10             1024
Average Severity                 32                21                1
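
Assuming the SCORES table and average_harm_severity helper from the sketch above, the three reporting profiles can be checked the same way (the table above shows whole-number approximations of these values):

```python
# Report counts for each scenario, taken from the table above.
scenarios = {
    "Focus on Severe": {"Death/Critical": 1, "Serious/Major": 10, "Moderate": 10,
                        "Minor": 5, "Negligible": 5, "Near Miss": 2},
    "Mixed Reporting": {"Death/Critical": 1, "Serious/Major": 10, "Moderate": 10,
                        "Minor": 10, "Negligible": 10, "Near Miss": 10},
    "High Reporting":  {"Death/Critical": 1, "Serious/Major": 4, "Moderate": 16,
                        "Minor": 64, "Negligible": 256, "Near Miss": 1024},
}

for name, counts in scenarios.items():
    print(f"{name}: {average_harm_severity(counts):.1f}")
# Focus on Severe: 32.0, Mixed Reporting: 21.8, High Reporting: 1.3
```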

The key thing to notice is the absence of a conflict of interest around reporting. If we report more near misses or low-severity incidents, generally the easier cases to hide or ignore, our score improves. Better yet, the score improves drastically if we are able to prevent the more serious cases.