
Predicting Human Error - Can We?

Updated: Jul 19

Investigating accidents and incidents serves one sole purpose: to identify the causes and prevent recurrence. Over time, this data can be analysed to identify common issues and precursors to events, which could be used to predict cases of human error in the future. It’s not quite as cut and dried as it sounds, but human behaviour is actually surprisingly predictable. The US military have used this concept to devise an AI programme called Raven Sentry, which was employed in Afghanistan back in 2020. It was able to predict outbreaks of violence in cities quite accurately, based on historical data compared against real-time information. You can read more about the project here, but ultimately it illustrates the point that with a bank of data (which for Raven Sentry even included the temperature) you can link together causal factors and predict subsequent human behaviour.

 

As already alluded to, in order to predict human error, the bank of historical data needs to be both robust and regularly updated. That is where the reporting culture of an organisation is crucial. Electronic systems are a good way of enabling reporting: digital copies of reports can be kept indefinitely (complying with all data protection regulations, of course), and they provide a platform for other programmes to integrate with for later analysis. Similarly, the investigation of causes and findings can be logged digitally, which enables linkages between incidents and causes to be generated. Using websites or computer programmes for reporting can have its drawbacks. Not everyone may have access, and if reporting requires users to create an account, this adds a further barrier. Additionally, hosting safety reports online carries cyber risks such as hacking or deletion of records. Nonetheless, in the modern tech world, using an online system to report incidents and conduct investigations would appear the most robust and time-saving option for investigators.

 

Further to the system used, the culture within the organisation needs to be one of open reporting. Charles Haddon-Cave conducted a review into the loss of Nimrod XV230 over Afghanistan in 2006. He published his findings in 2009, which ultimately led to the formation of the Military Aviation Authority (MAA). The organisation became the single point of authority for regulating military aviation: it provides all of the regulatory publications for the military and oversees its safety management processes. The idea behind the MAA was to shift the culture of the military towards a just culture. It seeks to eliminate the finger-pointing that had become inherent in incident handling, where people were afraid to report near-misses for fear of being punished. Near-miss reporting is almost as important as accident reporting, as it highlights which precautions are already working to prevent accidents from happening! A just culture is not a blameless one; rather, it acknowledges that mistakes happen and applies a fair ‘attribution of blame’. If something was a genuine mistake or error, then why did it happen? If someone intentionally sabotaged a piece of equipment, then ask why first, before applying a suitable punishment. To be able to predict human behaviour, you need an understanding of the decision-making processes of an organisation’s people. Openly reporting near-misses, hazard observations and incidents/accidents can capture the true root causes and build that picture for future prediction.

 

With a bank of data, the challenge for practitioners is then to draw any meaningful analysis from it. Human factors are qualitative; they are reasons, words, causes of accidents, and converting that into quantitative data is a challenge. The easiest way of doing this is by counting the number of times a particular cause, say fatigue, is responsible for accidents. You could take this a stage further by researching the correlation between hours slept the night before and mistakes made. Over time, the information would generate (potentially, depending on the statistical significance of the dataset) a predictive threshold: the number of hours of sleep below which an error becomes likely. This can be combined across various causes, such as hours of experience on type in the past month (currency), although some causes, such as the number of times an incident was caused by a breakdown in communication, may be limited to just the base level of analysis, as sketched below.
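To make that concrete, here is a minimal sketch in Python of the kind of “base level” analysis described above: tallying how often each cause appears, then bucketing error rates by hours slept the night before. The report records, field names and figures are entirely made up for illustration; a real dataset would need far more reports before any threshold became statistically meaningful.

```python
# Toy analysis of safety reports: cause frequencies and error rate by sleep.
# All records and fields below are hypothetical examples, not real data.
from collections import Counter, defaultdict

reports = [
    {"cause": "fatigue",       "hours_slept": 4.5, "error": True},
    {"cause": "communication", "hours_slept": 7.0, "error": True},
    {"cause": "fatigue",       "hours_slept": 5.0, "error": True},
    {"cause": "distraction",   "hours_slept": 8.0, "error": False},  # near-miss, no error
    {"cause": "fatigue",       "hours_slept": 6.5, "error": False},
]

# 1) Frequency count per cause: the simplest quantitative view of qualitative data.
cause_counts = Counter(rec["cause"] for rec in reports)
print(cause_counts.most_common())

# 2) Error rate per sleep bucket: a first step towards a "below X hours" threshold.
buckets = defaultdict(list)
for rec in reports:
    bucket = "< 6 h" if rec["hours_slept"] < 6 else ">= 6 h"
    buckets[bucket].append(rec["error"])

for bucket, errors in sorted(buckets.items()):
    rate = sum(errors) / len(errors)
    print(f"{bucket}: {rate:.0%} of reports involved an error (n={len(errors)})")
```

With enough reports, the crude bucketing could be replaced by a proper regression, but even the counting step alone turns qualitative causes into numbers that can be tracked over time.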

 

There are several models developed over the past 20 years that help investigators to categorise human factors and begin to build records. In 2000, Shappell and Wiegmann released their Human Factors Analysis and Classification System (HFACS), and the framework is still in use to this day. In more recent investigations, HFACS has been combined with more quantitative methods to better enable data analysis. For example, Bayesian Networks (BNs) add an aspect of probabilistic reasoning to human factors data analysis, and the combination has been applied in various studies (several of which are covered in Lam and Chan’s 2023 paper). Data analysis models such as HFACS-BN provide a guiding framework for investigations to create meaningful predictive evidence.
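As a rough illustration of the probabilistic side (not the published HFACS-BN models themselves, which are far larger and learn their probability tables from real investigation data), here is a toy two-node network in Python: one hypothetical “precondition” (fatigue) feeding one “unsafe act” (error), with invented probabilities. Libraries such as pgmpy handle full-sized networks, but the arithmetic below shows the core idea.

```python
# Toy two-node Bayesian network: Fatigued -> Error.
# All probabilities are invented for illustration only.

p_fatigued = 0.3                      # prior: P(Fatigued)
p_error_given = {True: 0.25,          # P(Error | Fatigued)
                 False: 0.05}         # P(Error | not Fatigued)

# Forward prediction: overall chance of an error, marginalising over fatigue.
p_error = sum(
    p_error_given[f] * (p_fatigued if f else 1 - p_fatigued)
    for f in (True, False)
)

# Diagnostic reasoning via Bayes' rule: given an error occurred, how likely is
# it that fatigue was a contributing precondition?
p_fatigued_given_error = p_error_given[True] * p_fatigued / p_error

print(f"P(error)            = {p_error:.3f}")
print(f"P(fatigued | error) = {p_fatigued_given_error:.3f}")
```

Running it gives roughly an 11% overall chance of an error and, once an error has occurred, about a 68% chance that fatigue was a contributing precondition: exactly the kind of “given what we know, what is most likely” reasoning that HFACS categories alone cannot provide.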

 

So, is predicting human behaviour really like looking into a crystal ball? To an extent, we can predict human behaviour if we look across historical events. However, our analysis is only as accurate as the recency and depth of our database. The more extensive the analysis and the bank of data we hold, the more accurately we can estimate when incidents are likely to occur. There will always, however, be an element of human error that isn’t predictable. The aim is to control the controllable and mitigate against as many precursors to error as we can. But it would be irresponsible to think we can completely eliminate human error. After all, to err is human.
