Verificationism, also known as the verification principle or the verifiability criterion of meaning, is a doctrine in philosophy which asserts that a statement is meaningful only if it is either empirically verifiable (can be confirmed through the senses) or a tautology (true by virtue of its own meaning or its own logical form). Verificationism rejects statements of metaphysics, theology, ethics and aesthetics as meaningless in conveying truth value or factual content, though they may be meaningful in influencing emotions or behavior.[1]
Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge.[2] The verifiability criterion underwent various revisions from the 1920s through the 1950s. By the 1960s, however, it was deemed irreparably untenable, and its abandonment would eventually precipitate the collapse of the broader logical positivist movement.
The roots of verificationism may be traced to at least the 19th century, in philosophical principles that aim to ground scientific theory in verifiable experience, such as C. S. Peirce's pragmatism and the work of the conventionalist Pierre Duhem, who fostered instrumentalism.[3] Verificationism as a principle would be conceived in the 1920s by the logical positivists of the Vienna Circle, who sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as empirical science.[4] The movement grounded itself in the empiricism of David Hume,[5] Auguste Comte and Ernst Mach, and in the positivism of the latter two, borrowing perspectives from Immanuel Kant and taking Einstein's general theory of relativity as its exemplar of science.[6]
Ludwig Wittgenstein's Tractatus, published in 1921, established the theoretical foundations for the verifiability criterion of meaning.[7] Building upon Gottlob Frege's work, the logical positivists also reformulated the analytic–synthetic distinction, reducing logic and mathematics to semantical conventions. This rendered logical truths, though unverifiable by the senses, tenable under verificationism as tautologies.[8]
Logical positivists within the Vienna Circle quickly recognized that the verifiability criterion was too stringent. In particular, universal generalizations can never be conclusively verified by any finite set of observations (no number of white swans, for example, verifies the statement "all swans are white"), so that, absent revisions to its criterion of meaning, verificationism would render vital domains of science and reason, including scientific hypotheses, meaningless.[9]
Rudolf Carnap, Otto Neurath, Hans Hahn and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to redefine universal generalizations as tautological rules, thereby reconciling them with the existing criterion, Hahn argued that the criterion itself should be weakened to accommodate non-conclusive verification.[10] Within the liberal wing, Neurath proposed the adoption of coherentism, though it was challenged by Schlick's foundationalism; his physicalism, however, would eventually be adopted over Mach's phenomenalism by most members of the Vienna Circle.[9][11]
In 1936, Carnap sought a switch from verification to confirmation.[9] Carnap's confirmability criterion (confirmationism) would not require conclusive verification (thus accommodating universal generalizations) but would allow partial testability to establish degrees of confirmation on a probabilistic basis. Carnap never succeeded in finalizing his thesis despite employing abundant logical and mathematical tools for the purpose. In all of Carnap's formulations, a universal law's degree of confirmation was zero.[12]
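The last point can be illustrated with a minimal sketch using Carnap's c* measure, on the simplifying assumptions of a language with a single one-place predicate P and an infinite domain of individuals a1, a2, …: the rule of succession that c* yields gives the first n instances of a law a prior probability that vanishes as n grows, so the law itself, which entails all of its instances, receives probability zero and hence zero confirmation from any finite evidence.

$$
c^{*}\bigl(P(a_{n+1}) \mid P(a_1)\wedge\cdots\wedge P(a_n)\bigr)=\frac{n+1}{n+2},
\qquad
m^{*}\bigl(P(a_1)\wedge\cdots\wedge P(a_n)\bigr)=\prod_{j=0}^{n-1}\frac{j+1}{j+2}=\frac{1}{n+1}\;\longrightarrow\;0 .
$$

Since $\forall x\,P(x)$ entails each finite conjunction, $m^{*}(\forall x\,P(x)) \le \tfrac{1}{n+1}$ for every $n$, hence $m^{*}(\forall x\,P(x)) = 0$ and the degree of confirmation $c^{*}(\forall x\,P(x),\, e) = 0$ for any finite evidence $e$.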
In Language, Truth and Logic, published that year, A. J. Ayer distinguished between strong and weak verification: a statement is strongly verifiable if its truth can be conclusively established in experience, and weakly verifiable if observation can render it probable. He also distinguished theoretical from practical verifiability, proposing that statements verifiable in principle should count as meaningful even if they cannot be verified in practice.[13][14]
Philosopher Karl Popper, a graduate of the University of Vienna, though not a member of the Vienna Circle, was among the foremost critics of verificationism. He identified fundamental deficiencies in verifiability as a criterion of meaning: scientific hypotheses, in his view, can never be completely verified, nor are they confirmable under Carnap's thesis,[15] while metaphysical, ethical and aesthetic statements, though unverifiable, are often rich in meaning and important in the origination of scientific theories.
Other philosophers also voiced their own criticisms of verificationism.
See main article: Falsifiability.
In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived to be intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (as is commonly misunderstood),[21] but as a criterion for demarcating scientific statements from non-scientific statements.
Notably, the falsifiability criterion would allow for scientific hypotheses (expressed as universal generalizations) to be held as provisionally true until proven false by observation, whereas under verificationism, they would be disqualified immediately as meaningless.
In formulating his criterion, Popper was informed by the contrasting methodologies of Albert Einstein and Sigmund Freud. Pointing to the general theory of relativity and its prediction of gravitational lensing, Popper observed that Einstein's theories carried significantly greater predictive risk than Freud's of being falsified by observation. Though Freud found ample confirmation of his theories in observations, Popper would note that this method of justification was vulnerable to confirmation bias, leading in some cases to contradictory outcomes. He would therefore conclude that predictive risk, or falsifiability, should serve as the criterion to demarcate the boundaries of science.[22]
Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[23] it would be widely and enthusiastically adopted among scientists.[15] Logical positivists too adopted the criterion, even as their movement ran its course, catapulting Popper, initially a contentious misfit, to carry the richest philosophy out of interwar Vienna.[21]
In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes".[24] Logical positivism's fall heralded postpositivism, in which Popper's view of human knowledge as hypothetical, continually growing and open to change ascended, and verificationism became mostly maligned in academic circles.
In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s,[25] was asked what he saw as its main defects, and answered that "nearly all of it was false".[24] However, he soon added that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical and philosophical questions largely resolve to ones of language and meaning.[24]
In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.[26]