An empirical statistical law, or (in popular terminology) a law of statistics, is a type of behaviour that has been observed across many datasets and, indeed, across a range of types of datasets.[1] Many of these observations have been formulated and proved as statistical or probabilistic theorems, and the term "law" has been carried over to these theorems. Other statistical and probabilistic theorems also have "law" as part of their names even though they did not obviously derive from empirical observations. Both types of "law", however, may be considered instances of a scientific law in the field of statistics. What distinguishes an empirical statistical law from a formal statistical theorem is that the pattern simply appears in natural distributions, without prior theoretical reasoning about the data.
There are several such popular "laws of statistics".
The Pareto principle is a popular example of such a "law". It states that roughly 80% of effects come from 20% of causes, and is therefore also known as the 80/20 rule.[2] In business, the rule is used to claim that 80% of a firm's revenue comes from just 20% of its customers.[3] In software engineering, it is often said that 80% of errors are caused by just 20% of the bugs.[4] It has likewise been observed that roughly 20% of the world produces about 80% of worldwide GDP,[5] and that 20% of the US population accounts for 80% of healthcare expenses.[6]
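The kind of concentration the 80/20 rule describes can be checked on synthetic data. The following minimal sketch draws hypothetical "customer revenue" figures from a Pareto distribution (the shape parameter and seed are illustrative choices, not values taken from the sources above) and measures what share of total revenue the top 20% of customers account for.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical customer revenues drawn from a Pareto distribution.
# A shape parameter near 1.16 corresponds to the classic 80/20 split;
# both the parameter and the sample size are illustrative.
revenues = rng.pareto(a=1.16, size=100_000) + 1.0

revenues_sorted = np.sort(revenues)[::-1]            # largest first
top_fifth = revenues_sorted[: len(revenues) // 5]    # top 20% of customers

share = top_fifth.sum() / revenues.sum()
print(f"Top 20% of customers account for {share:.0%} of revenue")
```

Because the distribution is heavy-tailed, the printed share fluctuates from run to run, but it typically lands in the neighbourhood of 80%.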
Zipf's law, described as an "empirical statistical law" of linguistics,[7] is another example. According to the "law", given a dataset of text, the frequency of a word is inversely proportional to its frequency rank. In other words, the second most common word should appear about half as often as the most common word, and the fifth most common word about one-fifth as often. However, what makes Zipf's law an "empirical statistical law", rather than merely a theorem of linguistics, is that it also applies to phenomena outside its original field. For example, a ranked list of US metropolitan populations also follows Zipf's law,[8] and even forgetting follows Zipf's law.[9] Summarizing several natural data patterns with a simple rule in this way is a defining characteristic of these "empirical statistical laws".
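The rank-frequency relationship can be made concrete with a short sketch. Assuming some large plain-text corpus is available (the filename below is hypothetical), the code counts word frequencies, ranks them, and compares each observed frequency with the 1/rank prediction scaled by the most common word's frequency.

```python
import re
from collections import Counter

# Hypothetical input file; any large plain-text corpus will do.
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
top_freq = counts.most_common(1)[0][1]   # frequency of the most common word

print(f"{'rank':>4} {'word':<12} {'observed':>9} {'Zipf 1/rank':>11}")
for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    predicted = top_freq / rank          # Zipf: frequency proportional to 1/rank
    print(f"{rank:>4} {word:<12} {freq:>9} {predicted:>11.0f}")
```

On natural-language text the observed and predicted columns tend to agree only roughly, which is consistent with Zipf's law being an empirical regularity rather than an exact rule.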
Examples of empirically inspired statistical laws that have a firm theoretical basis include:
- Statistical regularity
- Law of large numbers (illustrated in the sketch following these lists)
- Central limit theorem
- Regression toward the mean

Examples of "laws" with a weaker foundation include:
- Safety in numbers
- Benford's law

Examples of "laws" that are general observations rather than results with a theoretical background include:
- Rank-size distribution

Examples of supposed "laws" that are incorrect include:
- Law of averages
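For the first category, the law of large numbers is easy to observe empirically: the running mean of repeated fair-coin flips settles toward the true probability of heads. The sketch below is a minimal simulation of this; the seed and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate fair coin flips (1 = heads) and track the running mean,
# which the law of large numbers guarantees converges to 0.5.
flips = rng.integers(0, 2, size=100_000)
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:>7} flips: mean = {running_mean[n - 1]:.4f}")
```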