A risk matrix is a tool used during risk assessment to define the level of risk by considering the category of likelihood (often confused with one of its possible quantitative metrics, i.e. probability) against the category of consequence severity. It is a simple mechanism to increase the visibility of risks and assist management decision making.[1]
Risk is the lack of certainty about the outcome of making a particular choice. Statistically, the level of downside risk can be calculated as the product of the probability that harm occurs (e.g., that an accident happens) and the severity of that harm (i.e., the average amount of harm or, more conservatively, the maximum credible amount of harm). In practice, the risk matrix is a useful approach where neither the probability nor the harm severity can be estimated with accuracy and precision.
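The product described above can be sketched in a few lines. This is an illustrative example only; the function name and the figures in it are assumptions, not taken from the article.

```python
def downside_risk(probability: float, severity: float) -> float:
    """Expected harm: probability that harm occurs times its severity."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * severity

# Example: a 2% annual chance of an incident costing 50,000 units
# gives an expected annual loss of 1,000 units.
print(downside_risk(0.02, 50_000))  # → 1000.0
```

When either factor cannot be estimated this precisely, the article's point is that a categorical matrix is used instead of this calculation.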
Although standard risk matrices exist in certain contexts (e.g. US DoD, NASA, ISO),[2][3][4] individual projects and organizations may need to create their own or tailor an existing risk matrix. For example, the harm severity can be categorized as 'negligible', 'marginal', 'critical' and 'catastrophic'.
The likelihood of harm occurring might be categorized as 'certain', 'likely', 'possible', 'unlikely' and 'rare'. However, it must be considered that estimates of very low likelihoods may not be reliable.
The resulting risk matrix could be:
The company or organization then would calculate what levels of risk they can take with different events. This would be done by weighing the risk of an event occurring against the cost to implement safety and the benefit gained from it.
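A resulting risk matrix of the kind described above can be encoded as a simple lookup. This is a minimal sketch: the Low/Moderate/High/Extreme labels and the rule combining the two category indices are assumptions for illustration, not taken from the article or any particular standard.

```python
# Likelihood and severity categories from the example above,
# ordered from least to most severe.
LIKELIHOODS = ["rare", "unlikely", "possible", "likely", "certain"]
SEVERITIES = ["negligible", "marginal", "critical", "catastrophic"]

# Assumed risk levels (illustrative, not from any standard).
LEVELS = ["Low", "Moderate", "High", "Extreme"]

def risk_level(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) cell to an assumed risk level."""
    i = LIKELIHOODS.index(likelihood)
    j = SEVERITIES.index(severity)
    # Assumed rule: risk grows with the sum of the category indices.
    return LEVELS[min((i + j) // 2, len(LEVELS) - 1)]

print(risk_level("certain", "catastrophic"))  # → Extreme
print(risk_level("rare", "negligible"))       # → Low
```

An organization tailoring its own matrix would replace both the labels and the combining rule with ones reflecting its own risk appetite.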
The following is an example matrix of possible personal injuries, with particular accidents allocated to appropriate cells within the matrix:
| | Negligible | Marginal | Critical | Catastrophic |
|---|---|---|---|---|
| Certain | Stubbing toe | | | |
| Likely | | Fall | | |
| Possible | | | Major car accident | |
| Unlikely | | | | Aircraft crash |
| Rare | | | | Major tsunami |
On January 30, 1978,[6] a new version of US Department of Defense Instruction 6055.1 ("Department of Defense Occupational Safety and Health Program") was released. It is said to have been an important step towards the development of the risk matrix.[7]
In August 1978, business textbook author David E. Hussey defined an investment "risk matrix" with risk on one axis and profitability on the other. The values on the risk axis were determined by first determining risk impact and risk probability values in a manner identical to completing a 7 × 7 version of the modern risk matrix.[8]
A 5 × 4 version of the risk matrix was defined by the US Department of Defense on March 30, 1984, in "MIL-STD-882B System Safety Program Requirements".[9][10]
The risk matrix was in use by the acquisition reengineering team at the US Air Force Electronic Systems Center in 1995.[11]
Huihui Ni, An Chen and Ning Chen proposed some refinements of the approach in 2010.[12]
In 2019, the three most popular forms of the matrix were:
Other standards are also in use.[14]
In his article 'What's Wrong with Risk Matrices?',[15] Tony Cox argues that risk matrices exhibit several problematic mathematical features that make it harder to assess risks. These are:
Thomas, Bratvold, and Bickel[16] demonstrate that risk matrices produce arbitrary risk rankings. Rankings depend upon the design of the risk matrix itself, such as how large the bins are and whether one uses an increasing or decreasing scale. In other words, changing the scale can change the answer.
An additional problem is the imprecision of the categories of likelihood. For example, 'certain', 'likely', 'possible', 'unlikely' and 'rare' are not hierarchically related. A better set might be obtained by using the same base term, such as 'extremely common', 'very common', 'fairly common', 'less common', 'very uncommon', 'extremely uncommon', or a similar hierarchy on a base "frequency" term.
Another common problem is to assign rank indices to the matrix axes and multiply the indices to get a "risk score". While this seems intuitive, it results in an unevenly distributed set of scores: some values can never occur, while others arise from several very different likelihood/severity combinations.
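The uneven distribution of such scores is easy to demonstrate. The sketch below enumerates the products of rank indices on an assumed 5 × 4 matrix (matching the example categories used earlier); the specific dimensions are an illustrative choice.

```python
from collections import Counter

# Multiply row index (likelihood, 1-5) by column index (severity, 1-4)
# for every cell of a 5 × 4 matrix, counting how often each score occurs.
scores = Counter(i * j for i in range(1, 6) for j in range(1, 5))

print(sorted(scores))
# The possible products skip many values (there is no 7, 11, 13, ...)
# and repeat others: a score of 4 arises from 1×4, 2×2 and 4×1, so the
# same number can stand for very different likelihood/severity cells.
```

This is the "uneven distribution" problem: equal scores can hide materially different risks, and the gaps between adjacent scores carry no meaning.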
Douglas W. Hubbard and Richard Seiersen take the general research from Cox, Thomas, Bratvold, and Bickel, and provide specific discussion in the realm of cybersecurity risk. They point out that since 61% of cybersecurity professionals use some form of risk matrix, this can be a serious problem. Hubbard and Seiersen consider these problems in the context of other measured human errors and conclude that "The errors of the experts are simply further exacerbated by the additional errors introduced by the scales and matrices themselves. We agree with the solution proposed by Thomas et al. There is no need for cybersecurity (or other areas of risk analysis that also use risk matrices) to reinvent well-established quantitative methods used in many equally complex problems."[17]