By assigning a score to each level of severity in our incident tracking, we can better understand the culture of safety in our organization. Primarily: are we reporting enough near misses to account for incidents with more significant harm? In other words, taking an average score over time shows whether there is a general bias towards reporting exposure to unsafe conditions only when someone is harmed.
Setting the Values
Using the estimated incidence rates from research (the Frank E. Bird counts below) provides a good basis for assigning scores, as it fits the ratio we would like to see in reporting:
Severity Breakdown

| Severity | Count | Score | Product |
|---|---|---|---|
| Death/Critical | 1 | 100 | 100 |
| Serious/Major | 10 | 60 | 600 |
| Moderate | 30† | 30 | 300† |
| Minor | | 10 | |
| Negligible | | 1 | |
| Near Miss | 600 | 0 | 0 |
| Total | 641 | | 1000 |

†Bird's ratio gives a single count of 30 spanning Moderate, Minor, and Negligible; the product uses the middle score (10).
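The arithmetic behind the table can be sketched in a few lines. This is a minimal, hypothetical example, assuming the Bird-based counts and the scores above, with Moderate, Minor, and Negligible treated as one combined band of 30 incidents:

```python
# Verify the products and totals of the severity breakdown.
# Counts follow the Frank E. Bird ratio (1 : 10 : 30 : 600);
# the combined Moderate/Minor/Negligible band is scored at 10.
rows = [
    ("Death/Critical", 1, 100),
    ("Serious/Major", 10, 60),
    ("Moderate/Minor/Negligible", 30, 10),  # combined band
    ("Near Miss", 600, 0),
]

products = {name: count * score for name, count, score in rows}
total_count = sum(count for _, count, _ in rows)
total_product = sum(products.values())

print(total_count)    # 641
print(total_product)  # 1000
```

With an ideal Bird-shaped distribution, the score mass (1000) is spread over 641 reports, which is what pushes the long-run average down towards the low single digits.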
Because people are often too strict in their interpretation of “near miss”, opportunities for harm tend to be underreported. We could, and ultimately should, target an average of 1 at the highest.
While the distributions may vary — for example, in environments where any incident causes significant harm — there should still be a high ratio of near misses to report.
Examples
| Severity Counts | Focus on Severe | Mixed Reporting | High Reporting |
|---|---|---|---|
| Death/Critical | 1 | 1 | 1 |
| Serious/Major | 10 | 10 | 4 |
| Moderate | 10 | 10 | 16 |
| Minor | 5 | 10 | 64 |
| Negligible | 5 | 10 | 256 |
| Near Miss | 2 | 10 | 1024 |
| Average Severity | 32 | 21 | 1 |
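The averages above can be reproduced directly. A minimal sketch, assuming the score values assigned earlier (100, 60, 30, 10, 1, 0); the results land near the whole-number figures in the table:

```python
# Score per severity level, ordered Death/Critical .. Near Miss.
SCORES = [100, 60, 30, 10, 1, 0]

def average_severity(counts):
    """Score-weighted mean severity across all reported incidents."""
    weighted = sum(c * s for c, s in zip(counts, SCORES))
    return weighted / sum(counts)

examples = {
    "Focus on Severe": [1, 10, 10, 5, 5, 2],
    "Mixed Reporting": [1, 10, 10, 10, 10, 10],
    "High Reporting":  [1, 4, 16, 64, 256, 1024],
}

for name, counts in examples.items():
    print(name, round(average_severity(counts), 1))
# Focus on Severe 32.0
# Mixed Reporting 21.8
# High Reporting 1.3
```

Note how the same single critical incident barely moves the average when it is surrounded by a thousand near-miss reports.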
The key thing to notice is that this removes the conflict of interest around reporting. If we report more near misses or low-severity incidents (generally the easier cases to hide or ignore), our score improves. Better yet, the score improves drastically if we are able to prevent the more serious cases.