My stats guru colleague Dr Andrew Pratley and I are on the move to tackle Quantifornication, the plucking of numbers out of thin air. Last week was supposed to be our final co-written blog, but after the great response to the last one we couldn't resist sneaking in another.
Last week we wrote about the concept of directionality: taking what you know after applying the three-question framework and testing it with more data to gain even more certainty in your decisions. We defined directionality as a vector, meaning it has both direction and magnitude. We described how surveying one person, then ten, then a larger group (each time getting the same result) lets you be more confident in the data. A 50% result from a group of 10 people carries less weight than 50% from a group of 1,000.
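The intuition that 50% from 1,000 people is stronger evidence than 50% from 10 can be made concrete with the standard margin of error for a survey proportion. This is a minimal sketch using the normal approximation; the specific surveys are hypothetical.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a survey proportion of p from n people
    (normal approximation: z * sqrt(p * (1 - p) / n))."""
    return z * math.sqrt(p * (1 - p) / n)

# The same 50% "yes" answer, from two very different sample sizes:
small = margin_of_error(0.5, 10)    # roughly +/- 0.31, i.e. 19% to 81%
large = margin_of_error(0.5, 1000)  # roughly +/- 0.03, i.e. 47% to 53%
```

The answer is identical in both surveys, but the plausible range around it shrinks by a factor of ten; that narrowing is the extra "magnitude" the larger sample buys you.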
However, many of the questions we’re trying to answer or decisions we’re trying to make often don’t have easily accessible data sets. One approach is to ask experts to guess, but unfortunately, experts often aren’t that much better than the layperson.
Using the idea of directionality, we could approach things differently. Instead of trying to estimate what we’re interested in by guessing the value, we could measure a range of other variables that are causally related and use these to build a simple model based on objective data.
This is not a new idea; psychology has long had to develop approaches to the same problem. We can't actually measure someone's IQ directly. What we do is measure specific attributes and use these to make an estimate. Those attributes happen to be language, numeracy and spatial patterns, which is why they feature so heavily in IQ tests.
Returning to risk, let’s take the example of a catastrophic explosion at an oil refinery. We know these happen, and that the likelihood across the industry is low. Experts might make estimates based on the age of the refinery and by looking around at the systems and staff.
Could we do better? Using the concept of directionality we could begin to measure what we believe has a direct causal link to the likelihood of a catastrophic explosion. We could measure the amount spent on maintenance versus the required amount and look at this over the preceding years. We could look at the shift patterns and know that longer shifts result in more operator errors. We could look at the handover procedures and tagging system. We could measure the attitude of senior management towards hitting production targets as compared to safety.
We know that as these increase (the gap between the money required and the money spent, the length of shifts, the pressure to hit production targets), they all contribute to increasing the likelihood of a catastrophic explosion. We could also find variables that decrease the likelihood as they increase, such as the training and experience of the operators.
Expert judgement is then used to weight which of these variables has the most impact, rather than to guess the values themselves. Developed this way, the model links directly to the control measures. A model built from a number of verifiable inputs is more likely to give an accurate estimate than a small number of experts guessing what they think will happen.
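The approach above can be sketched as a simple weighted model. The variable names, weights and scaling below are illustrative assumptions, not measured refinery data; in practice the weights are where the expert judgement goes, while the inputs come from verifiable records.

```python
# Each factor: (weight, direction). Direction +1 means the factor raises the
# likelihood as it increases; -1 means it lowers it. Weights sum to 1.0.
RISK_FACTORS = {
    "maintenance_gap":     (0.35, +1),  # (required - actual spend) / required
    "shift_length":        (0.25, +1),  # excess hours over baseline, scaled 0-1
    "production_pressure": (0.20, +1),  # management attitude survey, scaled 0-1
    "operator_training":   (0.20, -1),  # training/experience coverage, 0-1
}

def risk_score(observations: dict) -> float:
    """Combine objective inputs (each pre-scaled to [0, 1]) into a
    single 0-1 likelihood score; higher means more risk."""
    score = 0.0
    for name, (weight, direction) in RISK_FACTORS.items():
        x = observations[name]
        # Protective factors count via their shortfall (1 - x).
        score += weight * (x if direction > 0 else 1 - x)
    return score

example = {
    "maintenance_gap": 0.4,      # spending 40% less than required
    "shift_length": 0.5,
    "production_pressure": 0.7,
    "operator_training": 0.8,    # well-trained crew pulls the score down
}
print(round(risk_score(example), 3))
```

The payoff of this structure is that each input maps onto a control measure: close the maintenance gap or shorten shifts and the score falls in a predictable, auditable way, which a single expert guess can't offer.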
Stay safe and adapt – with better measurement!