Birth Name: Paul Everett Swedal
Birth Date: 3 January 1920
Birth Place: Minneapolis, Minnesota, U.S.
Death Date: 14 February 2003
Death Place: Minneapolis, Minnesota, U.S.
Field: Psychology, philosophy of science
Work Institution: University of Minnesota
Alma Mater: University of Minnesota (BA, PhD)
Doctoral Advisor: Starke R. Hathaway
Doctoral Students: Harrison G. Gough, Dante Cicchetti, Donald R. Peterson, George Schlager Welsh
Known For: Minnesota Multiphasic Personality Inventory, genetics of schizophrenia, construct validity, clinical vs. statistical prediction, philosophy of science, taxometrics
Prizes: National Academy of Sciences (1987), APA Award for Lifetime Contributions to Psychology (1996), James McKeen Cattell Fellow Award (1998), Bruno Klopfer Award (1979)
Paul Everett Meehl (3 January 1920 – 14 February 2003) was an American clinical psychologist. He was the Hathaway and Regents' Professor of Psychology at the University of Minnesota, and past president of the American Psychological Association.[1] A Review of General Psychology survey, published in 2002, ranked Meehl as the 74th most cited psychologist of the 20th century, in a tie with Eleanor J. Gibson.[2] Throughout his nearly 60-year career, Meehl made seminal contributions to psychology, including empirical studies and theoretical accounts of construct validity, schizophrenia etiology, psychological assessment, behavioral prediction, metascience, and philosophy of science.
Paul Meehl was born January 3, 1920, in Minneapolis, Minnesota, to Otto and Blanche Swedal. His family name "Meehl" was his stepfather's. When he was 16, his mother died as the result of poor medical care, which, according to Meehl, greatly shook his faith in the expertise of medical practitioners and the diagnostic accuracy of clinicians. After his mother's death, Meehl lived briefly with his stepfather, then with a neighborhood family for one year so he could finish high school. He then lived with his maternal grandparents, who lived near the University of Minnesota.
Meehl started as an undergraduate at the University of Minnesota in March 1938. He earned his bachelor's degree in 1941[3] with Donald G. Paterson as his advisor, and took his PhD in psychology at Minnesota under Starke R. Hathaway in 1945. Meehl's graduate student cohort at the time included Marian Breland Bailey, William K. Estes, Norman Guttman, William Schofield, and Kenneth MacCorquodale. Upon taking his doctorate, Meehl immediately accepted a faculty position at the university, which he held throughout his career. His appointments spanned psychology, law, psychiatry, neurology, and philosophy, and he served as a fellow of the Minnesota Center for Philosophy of Science, which he founded with Herbert Feigl and Wilfrid Sellars.
Meehl rose quickly to academic positions of prominence. He was chairman of the University of Minnesota Psychology Department at age 31, president of the Midwestern Psychological Association at age 34, recipient of the American Psychological Association's Award for Distinguished Scientific Contributions to Psychology at age 38, and president of that association at age 42. He was promoted to Regents' professor, the highest academic position at the University of Minnesota, in 1968. He received the Bruno Klopfer Distinguished Contributor Award in personality assessment in 1979, and was elected to the National Academy of Sciences in 1987.
Meehl was not particularly religious during his upbringing, but in adulthood, during the 1950s, he collaborated with a group of Lutheran theologians and psychologists to write What, Then, Is Man?. The project was commissioned by the Lutheran Church–Missouri Synod through Concordia Seminary. It explored orthodox theology and psychological science, and how Christians (Lutherans in particular) could responsibly function as both Christians and psychologists without betraying either orthodoxy or sound science and practice.
In 1995, Meehl was a signatory of a collective statement titled "Mainstream Science on Intelligence", written by Linda Gottfredson and published in the Wall Street Journal.[4] He died on February 14, 2003, at his home in Minneapolis of chronic myelomonocytic leukemia. In 2005, Donald R. Peterson, a student of Meehl's, published a volume of their correspondence.
Meehl founded, along with Herbert Feigl and Wilfrid Sellars, the Minnesota Center for the Philosophy of Science, and was a leading figure in philosophy of science as applied to psychology.
Arguably Meehl's most important contributions to psychological research methodology were in legitimizing scientific claims about unobservable psychological processes. In the first half of the 20th century, psychology was dominated by operationism and behaviorism. As outlined in Bridgman's The Logic of Modern Physics, if two researchers had different operational definitions, they had different concepts; there was no "surplus meaning". If, for example, two researchers had different measures of "anomia" or "intelligence", they had different concepts. Behaviorists focused on stimulus–response theories and were deeply skeptical of "unscientific" explanations in terms of unobservable psychological processes. Behaviorists and operationists would have rejected as unscientific any notion that there is some general thing called "intelligence" inside a person's head that might be reflected almost equivalently in Stanford-Binet IQ tests or Wechsler scales. Meehl changed that via two landmark papers.
In 1948, Kenneth MacCorquodale and Meehl introduced the distinction between "hypothetical construct" and "intervening variable": "Naively, it would seem that there is a difference in logical status between constructs which involve the hypothesization of an entity, process, or event which is not itself observed, and constructs which do not involve such hypothesization." An intervening variable is simply a mathematical combination of operations. If one speaks of the "expected value" of a gamble (probability of winning × payoff for winning), one is not hypothesizing any unobservable psychological process; expected value is simply a mathematical combination of observables. On the other hand, statements about the "attractiveness" of a gamble, which is not observable or perfectly captured by any single operational measure, invoke a "hypothetical construct": a theoretical term that is not itself observable or a direct function of observables. As examples they cited Hull's rg (fractional anticipatory goal response), Allport's "biophysical traits", and Murray's "needs": "These constructs involve terms which are not wholly reducible to empirical terms; they refer to processes or entities that are not directly observed (although they need not be in principle unobservable)." Such constructs had "surplus meaning". Thus, good behaviorists and operationists should be comfortable with statements about intervening variables, but warier of hypothetical constructs.
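The distinction can be made concrete with a toy sketch using the article's gamble example (the function name is ours, not MacCorquodale and Meehl's):

```python
def expected_value(p_win: float, payoff: float) -> float:
    """An *intervening variable*: nothing but a mathematical
    combination of observables (a probability and a payoff)."""
    return p_win * payoff

# By contrast, the "attractiveness" of a gamble would be a
# *hypothetical construct*: no single formula over observables
# exhausts its meaning, so any one measure carries "surplus meaning".
print(expected_value(0.25, 100.0))  # 25.0
```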
In 1955, Lee J. Cronbach and Meehl legitimized theory tests about unobservable, hypothetical constructs. Constructs are unobservables, and they can be stable traits of individuals (e.g., "Need for Cognition") or temporary states (e.g., nonconscious goal activation). Previously, good behaviorists had been deeply skeptical of the legitimacy of psychological research on unobservable processes. Cronbach and Meehl introduced the concept of "construct" validity for cases in which there is no "gold standard" criterion for validating a test of a hypothetical construct; hence any such construct carries "surplus meaning". Construct validity was distinguished from predictive validity, concurrent validity, and content validity. They also introduced the concept of the "nomological net", the network of associations among constructs and measures, and argued that the meaning of a hypothetical construct is given by its relations to other variables in this network. One tests a theory of relations among hypothetical constructs by showing that putative measures of these constructs relate to each other as the theory, captured in the nomological network, implies. This set the stage for modern psychological testing and for the cognitive revolution in psychology, which focuses on the study of mental processes that are not directly observable.
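The logic of testing one strand of a nomological net can be sketched with a toy simulation (all names and numbers are illustrative assumptions, not Cronbach and Meehl's): if two instruments are putative measures of the same unobservable construct, the theory implies that scores on them should correlate.

```python
import random

def pearson(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# A latent construct (unobservable) and two fallible measures of it.
latent = [random.gauss(0, 1) for _ in range(5000)]
measure_a = [t + random.gauss(0, 0.5) for t in latent]
measure_b = [t + random.gauss(0, 0.5) for t in latent]

# Theory: both instruments tap the same construct, so they should
# correlate strongly -- one testable link in the nomological net.
r = pearson(measure_a, measure_b)
```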
After Karl Popper's The Logic of Scientific Discovery was published in English in 1959, Meehl counted himself a "Popperian" for a short time, and later "a 'neo-Popperian' philosophical eclectic", still using the Popperian approach of conjectures and refutations without endorsing all of Popper's philosophy. Influenced by Popper's asymmetry between confirmation and falsification, Meehl was a strident critic of using statistical null hypothesis testing to evaluate scientific theories. He believed that null hypothesis testing was partly responsible for the lack of progress in many of the "scientifically soft" areas of psychology (e.g., clinical, counseling, social, personality, and community psychology). Although Meehl harshly criticized psychology's overreliance on NHST, he also noted, "When I was a rat psychologist, I unabashedly employed significance testing in latent-learning experiments; looking back I see no reason to fault myself for having done so in the light of my present methodological views".[5] He mainly promoted a switch to interval hypothesis testing.
Meehl's paradox is that in the hard sciences, more sophisticated and precise methods make it harder to claim support for one's theory, while the opposite is true in soft sciences such as the social sciences. Hard sciences like physics make exact point predictions and work by testing whether observed data falsify those predictions. With increased precision, one is better able to detect small deviations from the model's predictions, so it becomes harder to claim support for the model. In contrast, the softer social sciences make only directional predictions, not point predictions; they claim support when the direction of the observed effect matches the prediction, rejecting only the null hypothesis of zero effect. Meehl argued that no treatment in the real world has exactly zero effect. With sufficient sample size, therefore, one can almost always reject the null hypothesis of zero effect, and researchers who guessed randomly at the sign of any small effect would have a 50–50 chance of finding confirmation with a sufficiently large sample.
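Meehl's argument can be illustrated with a small simulation (the effect size, sample size, and trial count are arbitrary choices for illustration): when the true effect is tiny but nonzero and the sample is large, a directional significance test "confirms" a randomly guessed sign about half the time.

```python
import math
import random

def directional_test(sample, predicted_sign, z_crit=1.645):
    """One-tailed z-test of H0: mean == 0 in the predicted direction."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return (z * predicted_sign) > z_crit  # alpha = .05, one-tailed

random.seed(1)
trials, confirmed = 200, 0
for _ in range(trials):
    # A "crud factor" world: the true effect is tiny (0.05 SD) but nonzero.
    sample = [random.gauss(0.05, 1.0) for _ in range(5000)]
    guess = random.choice([+1, -1])      # researcher guesses the sign
    confirmed += directional_test(sample, guess)

# With n = 5,000 the zero-effect null is almost always rejected in the
# true direction, so a random guess is "confirmed" roughly half the time.
rate = confirmed / trials
```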
Meehl was considered an authority on the development of psychological assessments using the Minnesota Multiphasic Personality Inventory (MMPI). While Meehl did not directly develop the original MMPI items (he was a high school junior when Hathaway and McKinley created the item pool), he contributed widely to the literature on interpreting patterns of responses to MMPI questions.[6] In particular, Meehl argued that the MMPI could be used to understand personality profiles systematically associated with clinical outcomes, something he termed a statistical (versus a "clinical") approach to predicting behavior.
As part of his doctoral dissertation, Meehl worked with Hathaway to develop the K scale indicator of valid responding for the MMPI. During initial clinical testing of the MMPI, a subset of individuals exhibiting clear signs of mental illness still produced normal personality profiles on the various clinical scales.[7] It was suspected that these individuals were demonstrating clinical defensiveness and presenting as asymptomatic and well-adjusted. Meehl and Hathaway employed a technique called "empirical criterion keying" to compare the responses of these defensive individuals with those of individuals who were not suspected of mental illness and who also produced normal MMPI profiles. The empirical criterion keying approach selected items based on their ability to maximally discriminate between these groups, not on theory or the face validity of the item content. As a result, items on the resulting scale, termed the K (for "correction") scale, would be difficult for individuals attempting to present as well-adjusted to avoid when taking the MMPI. Individuals who endorsed the K scale items were thought to be making a sophisticated attempt to conceal information about their mental health history from test administrators. The K scale is an early example of a putative suppressor variable.
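A toy sketch of empirical criterion keying (the items and endorsement rates below are fabricated illustrations, not real MMPI data): items are retained purely because they separate the two criterion groups, regardless of what the items appear to be "about".

```python
# Proportion of each group endorsing each (hypothetical) item.
defensive = {
    "item_1": 0.80, "item_2": 0.55, "item_3": 0.52, "item_4": 0.90,
}
genuinely_well = {
    "item_1": 0.30, "item_2": 0.50, "item_3": 0.48, "item_4": 0.35,
}

def key_items(group_a, group_b, min_gap=0.20):
    """Keep the items whose endorsement rates best discriminate the
    groups -- no appeal to theory or item content."""
    return sorted(
        item for item in group_a
        if abs(group_a[item] - group_b[item]) >= min_gap
    )

print(key_items(defensive, genuinely_well))  # ['item_1', 'item_4']
```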
The K scale is used as a complementary validity indicator to the L (for "lie") scale, whose items were selected based on the face validity of their content and are more obviously focused on impression management. The K scale has been popular among clinical psychologists and has been a useful tool for MMPI and MMPI-2 profile interpretation. Meehl and Hathaway continued to conduct research using MMPI validity indicators and noticed that K scale elevations were associated with greater denial of symptoms on some clinical scales more than others. To compensate, they developed a K scale correction factor aimed at offsetting the effects of defensive responding on other scales measuring psychopathology. Substantial subsequent research on the original MMPI clinical scales used these "K-corrected" scores, although research on the usefulness of the corrections has produced mixed results.[8] [9] The most recent iteration of the K scale, developed for the MMPI-2-RF, is still used for psychological assessments in clinical, neuropsychological, and forensic contexts.[10]
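The mechanics of a K correction can be sketched as follows; the weight and the scores are illustrative placeholders, not the published MMPI correction fractions:

```python
def k_corrected(raw: int, k: int, weight: float) -> float:
    """Add a fixed fraction of the K (defensiveness) score back onto
    a clinical scale's raw score, offsetting symptom denial."""
    return raw + weight * k

# Hypothetical example: raw clinical score 12, K score 20, weight 0.5.
print(k_corrected(raw=12, k=20, weight=0.5))  # 22.0
```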
Meehl's 1954 book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence analyzed the claim that mechanical (i.e., formal, algorithmic, actuarial) methods of data combination outperform clinical (i.e., subjective, informal) methods in predicting behavior. Meehl argued that mechanical methods of prediction, when used correctly, make more efficient and reliable decisions about patient prognosis and treatment. His conclusions were controversial and have long conflicted with the prevailing consensus about psychiatric decision-making.
Historically, mental health professionals have commonly made decisions based on their professional clinical judgment (i.e., combining clinical information "in their head" and arriving at a prediction about a patient).[11] Meehl theorized that clinicians would make more mistakes than a mechanical prediction tool created to combine clinical data and arrive at predictions. In his view, mechanical prediction approaches need not exclude any type of data from being combined and could incorporate coded clinical impressions. Once the clinical information is quantified, Meehl proposed, mechanical approaches would make perfectly reliable predictions: exactly the same output for exactly the same data, every time. Clinical prediction offers no such guarantee.
Meta-analyses comparing the efficiency of clinical and mechanical prediction have supported Meehl's (1954) conclusion that mechanical methods outperform clinical methods.[12] [13] In response to objections, Meehl continued to defend algorithmic prediction throughout his career and proposed that clinicians should rarely deviate from mechanically derived conclusions. To illustrate the rare legitimate exception, Meehl described a "broken leg" scenario in which mechanical prediction indicates that an individual has a 90% chance of attending the movies. The "clinician", however, knows that the individual recently broke his leg, a fact not factored into the mechanical prediction, and so can confidently conclude that the mechanical prediction will be incorrect. The broken leg is objective evidence, determined with high accuracy and highly correlated with staying home from the movies. Meehl argued, however, that mental health professionals rarely have access to countervailing information as clear as a broken leg, and therefore rarely if ever can appropriately disregard valid mechanical predictions.
Meehl argued that humans introduce biases when making decisions in clinical practice.[14] For example, clinicians may seek out information that supports their presuppositions, or overlook and ignore information that challenges their views. Additionally, Meehl described how clinical judgment could be influenced by overconfidence or by anecdotal observations unsupported by empirical research. In contrast, mechanical prediction tools can be configured to use important clinical information and are not influenced by psychological biases. In support of this conclusion, Meehl and his colleagues found that clinicians made less accurate decisions than mechanical formulas even when given the same mechanical formulas to aid their decision-making.[14] Human biases have since become central to research in diverse fields, including behavioral economics and decision-making.
Kahneman and Klein (2009) reported that expert intuition develops only with frequent, rapid, high-quality feedback. Few professions provide such feedback, so experts in most fields can be beaten by mechanical rules, as Meehl and others have documented. Kahneman et al. (2021) noted that professionals lacking such feedback can be beaten even by rules that simply average several known predictors. With some data, linear regression models work better; with lots of data, artificial intelligence models can work better still.
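The advantage of a simple equal-weight rule over an inconsistent judge who uses the same cues can be sketched in a toy simulation (the predictors and noise levels are assumptions chosen for illustration; the judge's extra noise stands in for fatigue and salience effects):

```python
import random

def pearson(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(2)
n = 4000
# Three fallible predictors of a common outcome.
outcome = [random.gauss(0, 1) for _ in range(n)]
preds = [[y + random.gauss(0, 1) for y in outcome] for _ in range(3)]

# The mechanical rule: average the known predictors with equal weights.
rule = [sum(col) / 3 for col in zip(*preds)]

# The "clinical" judge uses the same cues, but inconsistently.
judge = [r + random.gauss(0, 1.2) for r in rule]

# The consistent rule tracks the outcome better than the noisy judge.
rule_r, judge_r = pearson(rule, outcome), pearson(judge, outcome)
```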
Tetlock and Gardner (2015) and Hubbard (2020) have developed methods to help people improve their judgments, citing Meehl's work as a foundation for their own.
Meehl was elected president of the American Psychological Association in 1962. In his address to the annual convention, he presented his comprehensive theory about the genetic causes of schizophrenia. This conflicted with the prevailing notion that schizophrenia was primarily the result of a person's childhood rearing environment. Meehl argued schizophrenia should be considered a genetically based neurological disorder manifesting via complex interactions with personal and environmental factors. His reasoning was shaped by the writings of psychoanalyst Sandor Rado as well as the behavioral genetics findings at the time. He proposed that existing psychodynamic theory about schizophrenia could be meaningfully integrated into his neurobiological framework for the disorder.
Meehl hypothesized the existence of an autosomal dominant schizogene widespread throughout the population, which would function as a necessary, but not sufficient, condition for schizophrenia. The schizogene would manifest at the cellular level throughout the central nervous system and should be observed as a functional control aberration called hypokrisia. Cells exhibiting hypokrisia should contribute to a characteristic pattern of impaired integrative signal processing across multiple neural circuits in the brain, which Meehl termed "schizotaxia". In response to typical rearing environments and social reinforcement schedules, this neural aberration should invariably lead to a collection of observable behavioral tendencies called "schizotypy". Schizotypy indicators would include neurological soft signs, subtle differences in language usage ("cognitive slippage"), and effects on personality and emotion. Meehl believed many people in society exhibit signs of schizotypy as a result of the schizogene without showing signs of schizophrenia. Schizophrenia would occur only when individuals also carry other non-specific genetic risk factors ("polygenic potentiators") relevant to traits such as anhedonia, ambivalence, and social fear. These additional traits would be more likely to be expressed under stress (e.g., trauma) and inconsistent social reinforcement from parents. Given this combination of conditions, decompensation from schizotypy to schizophrenia would result.
Meehl's dominant schizogene theory had a substantial influence on subsequent research efforts.[15] His theorizing increased interest in longitudinal study of individuals at risk for psychosis and family members of people with schizophrenia who may be carrying the schizogene.[16] Meehl's descriptions of schizophrenia as largely a neurological phenomenon and schizotypy as a genetically based risk factor for schizophrenia have been supported.[17] However, researchers have not uncovered strong evidence for a single schizogene, and instead believe the genetic risk for schizophrenia is better explained by polygenic combinations of common variants and rare genetic mutations.[18] [19]
With the help of several colleagues, Meehl developed multiple statistical methods for identifying the presence of categorical groupings within biological or psychological variables.[20] Meehl was a critic of the checklist ("polythetic") structure used to categorize mental illnesses in diagnostic manuals such as the DSM-III. Although many DSM-defined psychiatric syndromes can be reliably identified in clinical settings, Meehl argued that the categorical nature of mental illness assumed by these diagnoses (i.e., that a person is either sick or well) should be tested empirically rather than accepted at face value. Meehl advocated a data-driven approach that could, in the words of Plato, "carve nature at its joints" and determine when it is most appropriate to conceptualize something as categorical or as continuous/dimensional.
In his writings, Meehl advocated the creation of a field called "taxometrics" to test for categorical groupings across diverse scientific disciplines. In this approach, latent "taxons" are conceptualized as causal factors leading to true differences in kind within a population. Taxons could include many kinds of biological and psychosocial phenomena, such as expression of an autosomal dominant gene (e.g., Huntington's disease), biological sex, or indoctrination into a highly homogeneous religious sect. Meehl envisioned applying taxometric approaches when the precise underlying latent causes are unknown and only observable "indicators" are available (e.g., psychiatric conditions). By mathematically examining patterns across these manifest indicators, Meehl proposed that converging evidence could be used to assess the plausibility of a true latent taxon while also estimating the base rate of that taxon.
Coherent Cut Kinetics is the suite of statistical tools developed by Meehl and his colleagues to perform taxometric analysis. "Cut Kinetics" refers to the mathematical operation of moving candidate cut points across distributions of indicator variables to create subsamples via dichotomous splits. Several metrics can then be applied to assess whether the patterns around the candidate cut points are best explained by a latent taxon. "Coherent" refers to the practice of using multiple indicators and metrics together to make a convergent case for the categorical or dimensional nature of the phenomenon being studied. Meehl played a role in developing the following taxometric procedures: MAMBAC, MAXCOV, MAXSLOPE,[21] MAXEIG, and L-Mode.
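The core idea behind MAXCOV can be illustrated with a toy sketch (this is a simplified illustration, not Meehl's published algorithm: the windowing scheme, group separation, and base rate are invented). Cases are sorted along one "input" indicator, and the covariance of two "output" indicators is computed within successive windows; a latent taxon produces a peaked covariance curve, highest where the windows mix taxon and complement members.

```python
import random

def cov(xs, ys):
    """Population covariance of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def maxcov_curve(data, n_windows=10):
    """Slide along the input indicator (column 0); in each window,
    compute the covariance of the two output indicators (columns 1, 2)."""
    data = sorted(data, key=lambda row: row[0])
    size = len(data) // n_windows
    return [
        cov([r[1] for r in data[i * size:(i + 1) * size]],
            [r[2] for r in data[i * size:(i + 1) * size]])
        for i in range(n_windows)
    ]

random.seed(3)
def person(in_taxon):
    # Taxon members are shifted upward on all three indicators.
    shift = 2.0 if in_taxon else 0.0
    return [shift + random.gauss(0, 1) for _ in range(3)]

sample = [person(random.random() < 0.5) for _ in range(6000)]
curve = maxcov_curve(sample)
# Covariance peaks in the mixed middle windows and falls toward the
# nearly pure extremes -- the taxonic signature MAXCOV looks for.
```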
Taxometric analyses have contributed to a shift away from the use of diagnostic categories among mental health researchers.[22] In line with Meehl's theorizing, studies using taxometric methods have demonstrated how most psychiatric conditions are better conceptualized as being dimensional rather than categorical[23] (e.g., psychopathy,[24] [25] posttraumatic stress disorder,[26] and clinical depression[27]). However, some possible exceptions have been identified such as a latent taxon representing the tendency to experience maladaptive dissociative states.[28] Since Meehl's death, factor mixture modeling has been proposed as an alternative to address the statistical weaknesses of his taxometric methods.[29]
Meehl practiced as a licensed and board-certified clinical psychologist throughout his career. In 1958, Meehl performed psychoanalysis on Saul Bellow while Bellow was an instructor at the University of Minnesota.[30] He identified as "strongly psychodynamic in theoretical orientation", and used a combination of psychoanalysis and rational emotive therapy.
In 1973, Paul Meehl published the polemic "Why I Do Not Attend Case Conferences".[31] He discussed his avoidance of case conferences in mental health clinics, where individual patients, or "cases", are discussed at length by a team, often as a training exercise. Meehl found such case conferences boring and lacking in intellectual rigor. In contrast, he recalled numerous interesting, illuminating case conferences in internal medicine and neurology departments, which often centered on pathologists' reports and objective data about patients' pathophysiology. In other words, case conferences outside the mental health disciplines benefited from including objective evidence against which clinical expertise could be compared and contrasted. Meehl argued for creating a psychiatric analogue to the pathologist's report. Additionally, he outlined a proposed format for case conferences beginning with an initial discussion of clinical observations and ending with the disclosure of a subset of patient data (e.g., psychological testing results) to compare with attendees' clinical inferences and proposed diagnoses.
Meehl also elaborated on the issue of clinical versus statistical prediction and the known weaknesses of unstructured clinical decision-making during typical case conferences. He encouraged clinicians to be humble when collaborating on patient care and pushed for a higher scientific standard of clinical reasoning in mental health treatment settings. Meehl directly identified several common deficiencies in reasoning that he had observed among his clinical colleagues, and to which he applied memorable names.
According to Faust,[33] "One of [Meehl's] most important, but less widely known potential contributions is the co-development, and the extension and elaboration of meta-science, or the science of science." Meehl coined the term 'cliometric metatheory',[34] where metatheory is defined as the empirical theory of scientific theorizing. He published several articles criticizing the weak use of hypothesis tests.[35] Together with Lykken, he coined the term 'crud factor', which expresses the idea that "everything is more or less correlated with everything in the social sciences" and makes null hypothesis tests of correlational effects uninteresting. He also discussed better approaches to testing theories in psychological science, based on the work of Imre Lakatos.