In decision theory, the sure-thing principle states that a decision maker who would take a certain action if they knew that event E had occurred, and also if they knew that the negation of E had occurred, should take that same action even if they know nothing about E.
The principle was coined by L. J. Savage, who illustrated it with the example of a businessman considering whether to buy a piece of property ahead of an election whose outcome he regards as relevant: having decided that he would buy if he knew one candidate were going to win, and also if he knew the other were, he concludes that he should buy even though he does not know who will win.[1]
Savage formulated the principle as a dominance principle, but it can also be framed probabilistically.[2] Richard Jeffrey and later Judea Pearl[3] showed that Savage's principle is only valid when the probability of the event considered (e.g., the winner of the election) is unaffected by the action (buying the property). Under such conditions, the sure-thing principle is a theorem in the do-calculus (see Bayesian networks). Blyth[4] constructed a counterexample to the sure-thing principle using sequential sampling in the context of Simpson's paradox, but that example violates the required action-independence condition.[5]
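A minimal sketch of the probabilistic version may help, with notation introduced here for illustration: write U for the decision maker's utility, a and b for two candidate actions, and suppose action-independence holds, so that P(E | a) = P(E | b) = P(E). If a is at least as good as b both given E and given ¬E, then by the law of total expectation

\[
\mathbb{E}[U \mid a] = P(E)\,\mathbb{E}[U \mid a, E] + P(\lnot E)\,\mathbb{E}[U \mid a, \lnot E]
\;\ge\; P(E)\,\mathbb{E}[U \mid b, E] + P(\lnot E)\,\mathbb{E}[U \mid b, \lnot E] = \mathbb{E}[U \mid b].
\]

If the action can change P(E), as in Blyth's sequential-sampling construction, the two sides are weighted by different probabilities and the inequality need not hold.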
In the illustration above, Savage described the principle in terms of knowledge. However, the formal statement of the principle, known as P2, does not involve knowledge because, in Savage's words, it "would introduce new undefined technical terms referring to knowledge and possibility that would render it mathematically useless without still more postulates governing these terms." Samet[6] provided a formal definition of the principle in terms of knowledge and showed that the impossibility of agreeing to disagree is a generalization of the sure-thing principle. The principle is also challenged by the Ellsberg and Allais paradoxes, in which actual people's choices seem to violate it.
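One common formal rendering of P2, given here as a sketch with notation assumed rather than taken from the cited texts: for acts f, g, h, h' and an event E, let f_E h denote the act that agrees with f on E and with h on the complement of E. P2 then requires

\[
f_E h \succeq g_E h \iff f_E h' \succeq g_E h',
\]

so that the preference between two acts depends only on the states in which they differ, with no reference to knowledge or possibility.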