Philip E. Tetlock
Birth place: Toronto, Ontario, Canada
Work institutions: University of Pennsylvania; University of California, Berkeley; Ohio State University
Alma mater: University of British Columbia (BA); Yale University (PhD)
Fields: political psychology, political forecasting, decision making
Philip E. Tetlock (born 1954) is a Canadian-American political science writer and is currently the Annenberg University Professor at the University of Pennsylvania, where he is cross-appointed at the Wharton School and the School of Arts and Sciences. He was elected a member of the American Philosophical Society in 2019.
He has written several non-fiction books at the intersection of psychology, political science, and organizational behavior, including Expert Political Judgment: How Good Is It? How Can We Know?; Unmaking the West: What-if Scenarios that Rewrite World History; and Counterfactual Thought Experiments in World Politics. Tetlock is also co-principal investigator of The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments of high-stakes, real-world events.
Tetlock was born in 1954 in Toronto, Canada, and completed his undergraduate work at the University of British Columbia and his doctoral work at Yale University, obtaining his PhD in 1979.
He has served on the faculty of the University of California, Berkeley (1979–1995, assistant professor), Ohio State University (the Burtt Endowed Chair in Psychology and Political Science, 1996–2001) and again at the University of California, Berkeley (the Mitchell Endowed Chair at the Haas School of Business, 2002–2010). Since 2011, he has been the Annenberg University Professor at the University of Pennsylvania.
Tetlock has received awards from scientific societies and foundations, including the American Psychological Association, American Political Science Association, American Association for the Advancement of Science, International Society of Political Psychology, American Academy of Arts and Sciences, the National Academy of Sciences and the MacArthur, Sage, Grawemeyer, and Carnegie Foundations.
He has published over 200 articles in peer-reviewed journals and has edited or written ten books.[1]
Tetlock's research program over the last four decades has explored five themes: good judgment and forecasting; accountability and the "intuitive politician"; taboo cognition and sacred values; the tension between political and politicized psychology; and experimental political philosophy.
See main article: The Good Judgment Project.

In his early work on good judgment, summarized in Expert Political Judgment: How Good Is It? How Can We Know?,[2] Tetlock conducted a set of small-scale forecasting tournaments between 1984 and 2003. The forecasters were 284 experts from a variety of fields, including government officials, professors, and journalists, whose opinions spanned the ideological spectrum from Marxist to free-market.
The tournaments solicited roughly 28,000 predictions about the future and found that the forecasters were often only slightly more accurate than chance, and usually worse than basic extrapolation algorithms, especially on longer-range forecasts three to five years out. Forecasters with the biggest news media profiles were also especially inaccurate, suggesting an inverse relationship between fame and accuracy.
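For context, the "basic extrapolation algorithms" that outperformed many experts are typically very simple baselines. A minimal sketch, assuming a crude predict-the-historical-base-rate rule (the rule and the data below are illustrative, not drawn from the study):

```python
def base_rate_forecast(history):
    """Crude extrapolation baseline: forecast that an event occurs with
    the same frequency it has shown in the past. Illustrative only; the
    tournaments' actual baseline algorithms are not specified here."""
    return sum(history) / len(history)

# An event observed in 3 of the last 10 periods gets a 0.3 forecast:
print(base_rate_forecast([0, 1, 0, 0, 1, 0, 0, 0, 1, 0]))  # 0.3
```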
For this body of work, Tetlock received the 2008 University of Louisville Grawemeyer Award for Ideas Improving World Order, as well as the 2006 Woodrow Wilson Award for the best book published on government, politics, or international affairs and the Robert E. Lane Award for the best book in political psychology, both from the American Political Science Association. The expert political judgment project also compared the accuracy track records of "foxes" and "hedgehogs" (two personality types identified in Isaiah Berlin's 1953 essay "The Hedgehog and the Fox"). "Hedgehogs" performed less well, especially on long-term forecasts within the domain of their expertise.
These findings were reported widely in the media and came to the attention of the Intelligence Advanced Research Projects Activity (IARPA) within the United States intelligence community, a fact partly responsible for the 2011 launch of a four-year geopolitical forecasting tournament that engaged tens of thousands of forecasters and drew over one million forecasts across roughly 500 questions of relevance to U.S. national security, broadly defined.
Since 2011, Tetlock and his wife and research partner, Barbara Mellers, have been co-leaders of the Good Judgment Project (GJP), a research collaborative that emerged as the winner of the IARPA tournament.[3] The original aim of the tournament was to improve geopolitical and geoeconomic forecasting. Illustrative questions include "What is the chance that a member will withdraw from the European Union by a target date?", "What is the likelihood of naval clashes claiming over 10 lives in the East China Sea?", and "How likely is the head of state of Venezuela to resign by a target date?" The tournament challenged GJP and its competitors at other academic institutions to develop innovative methods of recruiting gifted forecasters, of training forecasters in basic principles of probabilistic reasoning, of forming teams that are more than the sum of their individual parts, and of developing aggregation algorithms that most effectively distill the wisdom of the crowd.[4] [5] [6] [7]
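One family of aggregation algorithms discussed in the tournament literature averages forecasters' probabilities on the log-odds scale and then "extremizes" the result, pushing the consensus away from 0.5. The sketch below is a minimal illustration of that idea, assuming a hypothetical extremizing exponent a; it is not presented as GJP's actual algorithm.

```python
import math

def aggregate_forecasts(probs, a=2.0):
    """Aggregate individual probability forecasts by averaging them on
    the log-odds scale, then extremizing the mean by exponent a.
    a is a hypothetical tuning parameter chosen for illustration."""
    eps = 1e-6  # clip to avoid infinite log-odds at exactly 0 or 1
    clipped = [min(max(p, eps), 1 - eps) for p in probs]
    mean_logit = sum(math.log(p / (1 - p)) for p in clipped) / len(clipped)
    # Scale the mean log-odds, then map back to a probability.
    return 1 / (1 + math.exp(-a * mean_logit))

# Five forecasters who individually lean toward "yes" yield a more
# confident consensus once their agreement is pooled and extremized:
print(round(aggregate_forecasts([0.6, 0.7, 0.65, 0.55, 0.7]), 3))  # ~0.763
```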
The tournament yielded a number of surprising findings.
These and other findings are laid out in particularly accessible form in Tetlock and Gardner's 2015 book Superforecasting, which also profiles several "superforecasters." The authors stress that good forecasting does not require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course.
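"Keeping score" in these tournaments was done with proper scoring rules, most prominently the Brier score. A minimal sketch for binary questions (the example forecasts are invented):

```python
def brier_score(forecasts, outcomes):
    """Mean Brier score for a set of binary forecasts.

    forecasts: probabilities assigned to the event occurring (0..1)
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better: 0.0 is perfect, 0.25 matches always saying 50%."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80%, 30%, and 60% on three questions,
# of which the first and third occurred:
print(brier_score([0.8, 0.3, 0.6], [1, 0, 1]))  # ~0.0967
```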
There is a tension, if not a contradiction, between the positions taken in the Good Judgment Project and those Tetlock took in his earlier book Expert Political Judgment: How Good Is It? How Can We Know? (2005). The more pessimistic tone of Expert Political Judgment (2005) and the more optimistic tone of Superforecasting (2015) reflect less a shift in Tetlock's views on the feasibility of forecasting than the different sources of data in the two projects. Superforecasting focused on shorter-range forecasts, the longest of which, at about 12 months, was only as long as the shortest forecasts in the Expert Political Judgment project. Tetlock and Gardner (2015) also suggest that the public accountability of participants in the later IARPA tournament boosted performance: apparently, "even the most opinionated hedgehogs become more circumspect" when they feel their accuracy will soon be compared to that of ideological rivals.
Tetlock and Mellers[10] see forecasting tournaments as a possible mechanism for helping intelligence agencies escape from the blame-game (or accountability) ping-pong in which agencies find themselves whipsawed between clashing critiques that they were either too slow to issue warnings (false negatives such as 9/11) or too fast to issue warnings (false positives). They argue that tournaments are ways of signaling that an organization is committed to playing a pure accuracy game and generating probability estimates that are as accurate as possible (not tilting estimates to avoid repeating the most recent "mistake").[11]
Tetlock is also President and Chief Scientist of the Forecasting Research Institute, which organized, among other things, the Existential Risk Persuasion Tournament, in which 169 participants recorded probability judgments on existential risks between June and October 2022. The organizers asked 80 subject-matter experts and 89 "superforecasters" to estimate the probabilities, by 2030, 2050, and 2100, of events amounting either to a "catastrophe" (the deaths of at least 10 percent of humanity) or to "human extinction" (the human population dropping below 1,000). Overall, the superforecasters gave a median estimate of 9.05 percent for a catastrophe from any source by 2100, while the experts' median was 20 percent, with 95 percent confidence intervals of [6.13, 10.25] and [15.44, 27.60] percent, respectively.
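One common way to attach a 95 percent confidence interval to a panel's median estimate is a percentile bootstrap; the sketch below is an assumption for illustration (with made-up data), not necessarily how the tournament's published intervals were derived.

```python
import random
import statistics

def bootstrap_median_ci(estimates, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a panel's median.

    Resamples the estimates with replacement many times, takes the
    median of each resample, and reads off the middle 95% of those
    medians. Illustrative only."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(estimates, k=len(estimates)))
        for _ in range(n_boot)
    )
    lo = medians[int(n_boot * alpha / 2)]
    hi = medians[int(n_boot * (1 - alpha / 2))]
    return statistics.median(estimates), (lo, hi)

# Hypothetical catastrophe-risk estimates (in percent) from a small panel:
point, (lo, hi) = bootstrap_median_ci([8.0, 9.5, 10.0, 7.5, 9.0, 11.0])
print(f"median={point:.2f}%, 95% CI=({lo:.2f}%, {hi:.2f}%)")
```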
In a 1985 essay, Tetlock proposed that accountability is a key concept for linking the individual and social-system levels of analysis.[12] Accountability binds people to collectivities by specifying who must answer to whom, for what, and under what ground rules.[13] In his earlier work in this area, he showed that some forms of accountability can make humans more thoughtful and constructively self-critical (reducing the likelihood of biases or errors), whereas other forms can make us more rigid and defensive (mobilizing mental effort to defend previous positions and to criticize critics).[14] In a 2009 essay, Tetlock argued that much is still unknown about how psychologically deep the effects of accountability run, for instance, whether it is possible to check automatic or implicit association-based biases,[15] a topic with legal implications for companies in employment-discrimination class actions.[16]
In addition to his work on the bias-attenuating versus bias-amplifying effects of accountability, Tetlock has explored the political dimensions of accountability: when, for instance, do liberals and conservatives diverge in their preferences for "process accountability," which holds people responsible for respecting rules, versus "outcome accountability," which holds them responsible for bottom-line results?[17] [18] Tetlock uses the phrase "intuitive politician research program" to describe this line of work.[19]
Tetlock uses a different "functionalist metaphor" to describe his work on how people react to threats to sacred values—and how they take pains to structure situations so as to avoid open or transparent trade-offs involving sacred values.[20] [21] [22] [23] Real-world implications of this claim are explored largely in business-school journals such as the Journal of Consumer Research, California Management Review, and Journal of Consumer Psychology. This research argues that most people recoil from the specter of relativism: the notion that the deepest moral-political values are arbitrary inventions of mere mortals desperately trying to infuse moral meaning into an otherwise meaningless universe.[24] [25] [26] [27] Rather, humans prefer to believe that they have sacred values that provide firm foundations for their moral-political opinions. People can become very punitive "intuitive prosecutors" when they feel sacred values have been seriously violated, going well beyond the range of socially acceptable forms of punishment when given chances to do so covertly.[28]
Tetlock has a long-standing interest in the tensions between political and politicized psychology. He argues that most political psychologists tacitly assume that, relative to political science, psychology is the more basic discipline in their hybrid field.[29] [30] In this view, political actors, be they voters or national leaders, are human beings whose behavior should be subject to fundamental psychological laws that cut across cultures and historical periods. Although he too occasionally adopts this reductionist view of political psychology in his work, he has also raised in numerous articles and chapters the contrarian possibility that reductionism sometimes runs in reverse, and that psychological research is often driven by ideological agendas (of which the psychologists often seem to be only partly conscious). Tetlock has advanced variants of this argument in articles on the links between cognitive styles and ideology (the fine line between rigid and principled)[31] [32] as well as on the challenges of assessing value-charged concepts like symbolic racism[33] and unconscious bias (is it possible to be a "Bayesian bigot"?).[34] [35] [36] [37] Tetlock has also co-authored papers on the value of ideological diversity in psychological and social science research.[38] [39] One consequence of the lack of ideological diversity in high-stakes, soft-science fields is frequent failures of what Tetlock calls turnabout tests.[40] [41] [42]
In collaboration with Greg Mitchell and Linda Skitka, Tetlock has conducted research on hypothetical societies and intuitions about justice ("experimental political philosophy"). The spotlight here is on a fundamental question in political theory: who should get what from whom, when, how, and why? In real-world debates over distributive justice, however, Tetlock argues it is virtually impossible to disentangle the factual assumptions that people are making about human beings from the value judgments people are making about end-state goals, such as equality and efficiency.[43] [44] [45] [46] [47] Hypothetical society studies make it possible for social scientists to disentangle these otherwise hopelessly confounded influences on public policy preferences.