Verificationism, also known as the verification principle or the verifiability criterion of meaning, is the philosophical doctrine that a statement is meaningful only if it is either empirically verifiable (i.e., confirmable through sense experience) or a truth of logic (e.g., a tautology).
Verificationism rejects statements of metaphysics, theology, ethics, and aesthetics as cognitively meaningless.[1][2] Such statements may be meaningful in influencing emotions or behavior, but not in conveying truth value, information, or factual content.[3] Verificationism was a central thesis of logical positivism, a movement in analytic philosophy that emerged in the 1920s among philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge.
Although earlier philosophical principles aiming to ground scientific theory in verifiable experience can be found in the work of the American pragmatist C. S. Peirce and of the French conventionalist Pierre Duhem, who fostered instrumentalism,[4] the project of verificationism was launched by the logical positivists who, emerging from the Berlin Circle and the Vienna Circle in the 1920s, sought an epistemology under which philosophical discourse would be, in their view, as authoritative and meaningful as empirical science.
The logical positivists drew the verifiability criterion of cognitive meaningfulness from Ludwig Wittgenstein's philosophy of language, as posed in his 1921 book Tractatus,[5] and, led by Bertrand Russell, sought to reformulate the analytic–synthetic distinction in a way that would reduce mathematics and logic to semantical conventions. This move was pivotal to verificationism: otherwise, logic and mathematics would be classified as synthetic a priori knowledge and thus rendered "meaningless" under the criterion.
Seeking grounding in the empiricism of David Hume,[6] Auguste Comte, and Ernst Mach (along with the positivism of the latter two), they borrowed some perspectives from Immanuel Kant and took Albert Einstein's general theory of relativity as the exemplar of science.
Logical positivists within the Vienna Circle quickly recognized that the verifiability criterion was too stringent. Notably, universal generalizations are empirically unverifiable, so that, under verificationism, vast domains of science and reason, such as scientific hypotheses, would be rendered meaningless.[7]
Rudolf Carnap, Otto Neurath, Hans Hahn, and Philipp Frank led a faction seeking to make the verifiability criterion more inclusive, beginning a movement they referred to as the "liberalization of empiricism". Moritz Schlick and Friedrich Waismann led a "conservative wing" that maintained a strict verificationism. Whereas Schlick sought to reduce universal generalizations to frameworks of 'rules' from which verifiable statements can be derived,[8] Hahn argued that the verifiability criterion should accommodate less-than-conclusive verifiability.[9] Among other ideas espoused by the liberalization movement were physicalism over Mach's phenomenalism, coherentism over foundationalism, as well as pragmatism and fallibilism.[7][10]
In 1936, Carnap sought a switch from verification to confirmation.[7] His confirmability criterion (confirmationism) would not require conclusive verification, thus accommodating universal generalizations, but would allow partial testability to establish "degrees of confirmation" on a probabilistic basis. Despite employing abundant logical and mathematical tools for the purpose, Carnap never succeeded in formalizing his thesis: in all of his formulations, a universal law's degree of confirmation is zero.[11]
That same year saw the publication of A. J. Ayer's Language, Truth and Logic, in which he proposed two types of verification: strong and weak. This system espoused conclusive verification yet accommodated probabilistic inclusion where verifiability is inconclusive. Ayer also distinguished between practical and theoretical verifiability: under the latter, propositions that cannot be verified in practice would still be meaningful if they can be verified in principle.[12][13]
Philosopher Karl Popper, a graduate of the University of Vienna though not a member of the Vienna Circle, was among the foremost critics of verificationism. He identified three fundamental deficiencies in verifiability as a criterion of meaning.
Popper regarded scientific hypotheses as never completely verifiable, nor "confirmable" under Rudolf Carnap's thesis.[14] He also found that some non-scientific statements (metaphysical, ethical, and aesthetic) were, indeed, rich in meaning and important in the origination of scientific theories.
Other philosophers also voiced their own criticisms of verificationism.
See main article: Falsifiability.
In The Logic of Scientific Discovery (1959), Popper proposed falsifiability, or falsificationism. Though formulated in the context of what he perceived as intractable problems in both verifiability and confirmability, Popper intended falsifiability not as a criterion of meaning like verificationism (a common misunderstanding),[20] but as a criterion for demarcating scientific statements from non-scientific ones.
Notably, the falsifiability criterion allows scientific hypotheses (expressed as universal generalizations) to be held provisionally true until proven false by observation, whereas under verificationism they would be immediately disqualified as meaningless.
In formulating his criterion, Popper was informed by the contrasting approaches of Albert Einstein and Sigmund Freud. Popper noticed that Einstein sought out data that could disprove his theories: Einstein made predictions about future instances and then tried to learn more in order to test the validity of his hypotheses. Freud, by contrast, selected data that could be shaped to fit his theories, and his theories were crafted to explain the past, not the future. For Popper, this clarified a key difference between science and pseudoscience.[21]
Though falsificationism has been criticized extensively by philosophers for methodological shortcomings in its intended demarcation of science,[22] Popper remains the philosopher of science most often praised by scientists.[14] Despite its problems, his criterion ensured that scientific theory would henceforth be anchored in empiricism, and the logical positivists themselves adopted falsifiability, catapulting Popper, initially a contentious misfit, into the position of having carried the richest philosophy out of interwar Vienna.[20]
In 1967, John Passmore, a leading historian of 20th-century philosophy, wrote, "Logical positivism is dead, or as dead as a philosophical movement ever becomes".[23] Logical positivism's fall heralded postpositivism, in which Popper's view of human knowledge as hypothetical, continually growing, and open to change gained ascendancy, and verificationism became largely maligned in academic circles.
In a 1976 TV interview, A. J. Ayer, who had introduced logical positivism to the English-speaking world in the 1930s,[24] was asked what he saw as its main defects, and answered that "nearly all of it was false".[23] He soon added, however, that he still held "the same general approach", referring to empiricism and reductionism, whereby mental phenomena resolve to the material or physical, and philosophical questions largely resolve to ones of language and meaning.[23] In 1977, Ayer acknowledged that the verification principle, though no longer widely accepted, still held relevance and was still being utilized: "The attitude of many philosophers reminds me of the relationship between Pip and Magwitch in Dickens's Great Expectations. They have lived on the money, but are ashamed to acknowledge its source".
In the late 20th and early 21st centuries, the general concept of verification criteria—in forms that differed from those of the logical positivists—was defended by Bas van Fraassen, Michael Dummett, Crispin Wright, Christopher Peacocke, David Wiggins, Richard Rorty, and others.[25]