Tuesday, September 22, 2020

Scientific Realism and Antirealism

Debates about scientific realism concern the extent to which we are entitled to hope or believe that science will tell us what the world is really like. Realists tend to be optimistic; antirealists do not. To a first approximation, scientific realism is the view that well-confirmed scientific theories are approximately true; the entities they postulate do exist; and we have good reason to believe their main tenets. Realists often add that, given the spectacular predictive, engineering, and theoretical successes of our best scientific theories, it would be miraculous were they not to be approximately correct. This natural line of thought has an honorable pedigree yet has been subject to philosophical dispute since modern science began.

 

In the 1970s, a particularly strong form of scientific realism was advocated by Putnam, Boyd, and others. When scientific realism is mentioned in the literature, usually some version of this is intended. It is often characterized in terms of these commitments:

 

· Science aims to give a literally true account of the world.

· To accept a theory is to believe it is (approximately) true.

· There is a determinate mind-independent and language-independent world.

· Theories are literally true (when they are) partly because their concepts “latch on to” or correspond to real properties (natural kinds, and the like) that causally underpin successful usage of the concepts.

· The progress of science asymptotically converges on a true account.

 

 

1. Brief History before the 19th Century

The debate begins with modern science. Bellarmine advocated an antirealist interpretation of Copernicus’s heliocentrism—as a useful instrument that saved the phenomena—whereas Galileo advocated a realist interpretation: the planets really do orbit the sun. More generally, 17th century protagonists of the new sciences advocated a metaphysical picture: nature is not what it appears to our senses—it is a world of objects (Descartes’ matter-extension, Boyle’s corpuscles, Huygens’ atoms, and so forth) whose primary properties (Cartesian extension, or the sizes, shapes, and hardness of atoms and corpuscles, and/or forces of attraction or repulsion, and so forth) are causally responsible for the phenomena we observe. The task of science is “to strip reality of the appearances covering it like a veil, in order to see the bare reality itself” (Duhem 1991).

 

This metaphysical picture quickly led to empiricist scruples, voiced by Berkeley and Hume. If all knowledge must be traced to the senses, how can we have reason to believe scientific theories, given that reality lies behind the appearances (hidden by a veil of perception)? Indeed, if all content must be traced to the senses, how can we even understand such theories? The new science seems to postulate “hidden” causal powers without a legitimate epistemological or semantic grounding. A central problem for empiricists becomes that of drawing a line between objectionable metaphysics and legitimate science (portions of which seem to be as removed from experience as metaphysics seems to be).

Kant attempted to circumvent this problem and find a philosophical home for Newtonian physics. He rejected both a veil of perception and the possibility of our representing the noumenal reality lying behind it. The possibility of making judgments depends on our having structured what is given: experience of x qua object requires that x be represented in space and time, and judgments about x require that x be located in a framework of concepts. What is real and judgable is just what is empirically real—what fits our system of representation in the right way—and there is no need for, and no possibility of, problematic inferences to noumenal goings-on. In pursuing this project Kant committed himself to several claims about space and time—in particular that space must be Euclidean, which he regarded as both a priori (because a condition of the possibility of our experience of objects) and synthetic (because not derivable from analytical equivalences)—which became increasingly problematic as 19th century science and mathematics advanced.

 

2. The 19th Century Debate

Many features of the contemporary debates were fashioned in 19th century disputes about the nature of space and the reality of forces and atoms. The principals of these debates—Duhem, Helmholtz, Hertz, Kelvin, Mach, Maxwell, Planck, and Poincaré—were primarily philosopher-physicists. Their separation into realists and antirealists is complicated, but Helmholtz, Hertz, Kelvin, Maxwell, and Planck had realist sympathies and Duhem, Mach, and Poincaré had antirealist doubts.

 

a. Poincaré’s Conventionalism

By the late 19th century several consistent non-Euclidean geometries, mathematically distinct from Euclidean geometry, had been developed. Euclidean geometry has a unique-parallels axiom, and the angle sum of a triangle equals 180º; spherical geometry, for example, has a zero-parallels axiom, and the angle sum of a triangle is greater than 180º. These geometries raise the possibility that physical space could be non-Euclidean. Empiricists think we can determine whether physical space is Euclidean through experiments. For example, Gauss allegedly attempted to measure the angles of a triangle between three mountaintops to test whether physical space is Euclidean. Realists think physical space has some determinate geometrical character even if we cannot discover what character it has. Kantians think that physical space must be Euclidean because only Euclidean geometry is consistent with the form of our sensibility.
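To make the contrast concrete, here is the standard angle-sum formula for a spherical triangle (a textbook result, not part of the original passage): on a sphere of radius R, a triangle of area A has angle sum

    \alpha + \beta + \gamma = 180^{\circ} + \frac{A}{R^{2}} \cdot \frac{180^{\circ}}{\pi}

The excess over 180º shrinks toward zero as R grows, so a null result like Gauss’s could at best show that any curvature is too small to detect, not that space is exactly Euclidean.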

 

Poincaré (1913) argued that empiricists, realists, and Kantians are wrong: the geometry of physical space is not empirically determinable, factual, or synthetic a priori. Suppose Gauss’s experiment gave the angle-sum of a triangle as 180º. This would support the hypothesis that physical space is Euclidean only under certain presuppositions about the coordination of optics with geometry: that the shortest path of an undisturbed light ray is a Euclidean straight line. Instead, for example, the 180º measurement could also be accommodated by presupposing that light rays traverse shortest paths in spherical space but are disturbed by a force, so that physical space is “really” non-Euclidean: the true angle-sum of the triangle is greater than 180º, but the disturbing force makes it “appear” that space is Euclidean and the angle-sum of the triangle is 180º.
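Schematically (this gloss and its symbols are illustrative, not Poincaré’s own notation), the two packages fit the same measurement because the postulated disturbing force exactly offsets the spherical excess ε:

    \theta_{\mathrm{obs}} = \theta_{\mathrm{true}} + \delta_{\mathrm{force}} =
    \begin{cases} 180^{\circ} + 0 & \text{(Euclidean geometry, undisturbed light)} \\ (180^{\circ} + \varepsilon) - \varepsilon & \text{(spherical geometry, disturbed light)} \end{cases}

Only θ_obs is measurable, so the measurement bears on the conjunction of geometry and optics, never on the geometry alone.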

 

Arguing that there is no fact of the matter about the geometry of physical space, Poincaré proposed conventionalism: we decide conventionally that geometry is Euclidean, forces are Newtonian, and light travels in Euclidean straight lines, and we then see whether experimental results can be fitted to those conventions. Conventionalism is not an “anything-goes” doctrine—not all stipulations will accommodate the evidence—but the claim that the physical meaning of measurements and evidence is determined by conventionally adopted frameworks. Measurements of lines and angles typically rely on the hypothesis that light travels shortest paths. But this hypothesis lacks physical meaning unless we decide whether shortest paths are Euclidean or non-Euclidean. These conventions cannot be experimentally refuted or confirmed, since experiments have physical meaning only relative to them. Which group of conventions we adopt depends on pragmatic factors: other things being equal, we choose conventions that make physics simpler, more tractable, more familiar, and so forth. Poincaré, for example, held that, because of its simplicity, we would never give up Euclidean geometry.

 

b. The Reality of Forces and Atoms

Ever since Newton, a certain realist ideal of science was influential: a theory that would explain all phenomena as the effects of moving atoms subject to forces. By the 1880s many physicists came to doubt the attainability of this ideal since classical mechanics lacked the tools to describe a host of terrestrial phenomena: “visualizable” atoms that are subject to position-dependent central forces (so successful for representing celestial phenomena) were ill-suited for representing electromagnetic phenomena, “dissipative” phenomena in heat engines and chemical reactions, and so forth. The concepts of atom and force became questionable. The kinetic theory of gases lent support to atomism, yet no consistent models could be found (for example, spectroscopic phenomena required atoms to vibrate while specific heat phenomena required them to be rigid). Moreover, intermolecular forces allowing for internal vibration and deformation could not be easily conceptualized as Newtonian central forces. Newtonian action-at-a-distance forces also came under pressure with the increasing acceptance of Maxwell’s theory of electromagnetism, which attributed electromagnetic phenomena to polarizations in a dielectric medium propagated by contiguous action.

Many thought that physics had become a disorganized patchwork of poorly understood theories, lacking coherence, unity, empirical determinacy, and adequate foundations. As a result, physicists became increasingly preoccupied with foundational efforts to put their house in order. The most promising physics required general analytical principles (for example, conservation of energy and action, Hamilton’s principle) that could not be derived from Newtonian laws governing systems of classical atoms. The abstract concepts (action, energy, generalized potential, entropy, absolute temperature) needed to construct these principles could not be built from the ordinary intuitive concepts of classical mechanics. They could, however, be developed without recourse to “hidden mechanisms” and independently of specific hypotheses about the reality underlying the phenomena.

Most physicists continued to be realists: they believed in a deeper reality underlying the phenomena that physics can meaningfully investigate; for them, the pressing foundational problem was to articulate the concepts and develop the laws that applied to that reality. But some physicists became antirealists. Some espoused local antirealism: they were antirealist about some kinds of entities, as Hertz (1956) was about forces, without being antirealist about physics generally.

 

c. The Aim of Science: Causal Explanation or Abstract Representation?

Others espoused global antirealism. Like contemporary antirealists, they questioned the relationship among physics, common sense, and metaphysics, the aims and methods of science, and the extent to which science, qua attempt to fathom the depth and extent of the universe, is bankrupt. While their realist colleagues hoped for a unified, explanatorily complete, fundamental theory as the proper aim of science, these global antirealists argued on historical grounds that physics had evolved into its current disorganized mess because it had been driven by the unattainable metaphysical goal of causal explanation. Instead, they proposed freeing physics from metaphysics, and they pursued phenomenological theories, like thermodynamics and energetics, which promised to provide abstract, mathematical organizations of the phenomena without inquiring into their causes. To justify this pursuit philosophically, they proposed a re-conceptualization of the aim and scope of physics that would bring order and clarity to science and be attainable. On this re-conceptualization, the aim of science is variously characterized: as economy of thought (science is a useful instrument without literal significance (Mach 1893)); as the discovery of the real relations between the hidden entities underlying the phenomena (Poincaré 1913); or as the discovery of a “natural classification” of the phenomena (a mathematical organization of the phenomena that is the reflection of a hidden ontological order (Duhem 1991)). These affinities between 19th century global antirealism and 20th century antirealism, however, mask fundamental differences: the former is driven by methodological considerations concerning the proper way to do physics, whereas the latter is driven by traditional metaphysical or epistemological concerns (about the meaningfulness and credibility of claims about goings-on behind the veil of appearances).

 

3. Logical Positivism

Logical positivism began in Vienna and Berlin in the 1910s and 1920s and migrated to America after 1933, when many of its proponents fled Nazism. The entire post-1960 conversation about scientific realism can be viewed as a response to logical positivism. More a movement than a position, the positivists adopted a set of philosophical stances: pro-science (including pro-verification and pro-observation) and anti-metaphysics (including anti-cause, anti-explanation, anti-theoretical entities). They are positivists because of their pro-science stance; they are logical positivists because they embraced and used the formal logic techniques developed by Frege, Russell, and Wittgenstein to clarify scientific and philosophical language.

 

a. General Background

As physics developed in the early 20th century, many of the 19th century methodological worries sorted themselves out: Perrin’s experiments on Brownian motion persuaded most physicists of the reality of atoms; special relativity unified mechanics and electromagnetism and signaled the demise of traditional mechanism; general relativity brought gravity into the relativistic framework; quantum mechanics produced an account of the microscopic world that allowed atoms to vibrate and was spectacularly supported empirically. Moreover, scientific developments undermined several theses formerly taken as necessarily true. Einstein’s famous analysis of absolute simultaneity showed that Newtonian absolute space and time were incorrect and had to be replaced by the space-time structure of special relativity. His general theory of relativity introduced an even stranger notion of space-time: a space-time with a non-Euclidean structure of variable curvature. This undermined Kant’s claims that space has to be Euclidean and that there is synthetic a priori knowledge. Finally, quantum mechanics, despite its empirical success, raised problems of its own, since quantum particles have strange properties—they cannot have both determinate position and momentum at a given time, for example—and the quantum world has no unproblematic interpretation. So, though everyone was converted to atomism, no one understood what atoms were.

 

Logical positivism developed within this scientific context. Nowadays the positivists are often depicted as reactionaries who developed a crude, ahistorical philosophical viewpoint with pernicious consequences (Kuhn 1970, Kitcher 1993). In their day, however, they were revolutionaries, attempting to come to grips with the profound changes that Einstein’s relativity and Bohr’s quantum mechanics had wrought on the worldview of classical physics and to provide firm logical foundations for all science.

 

Logical positivism’s philosophical ancestry was traditionally traced to Hume’s empiricism (Putnam 1962, Quine 1969). On this interpretation, the positivist project provides epistemological foundations for problematic sentences of science that purport to describe unobservable realities, such as electrons, by reducing sentences employing these concepts to unproblematic sentences that describe only observable realities. Friedman (1999) offers a different, Kantian interpretation: their project provides objective content for science, as Kant had attempted, by showing how it organizes our experience into a structured world of objects, but without commitment to scientifically outdated aspects of Kant’s apparatus, such as synthetic a priori truths or the necessity of Euclidean geometry. Whichever interpretation is correct, the logical positivists clearly began with traditional veil-of-perception worries (§1) and insisted on a distinction that both Hume and Kant advocated—between meaningful science and meaningless metaphysics.

 

b. The Logical Part of Logical Positivism

This distinction rests on their verificationist theory of meaning, according to which the meaning of a sentence is its verification conditions and understanding a sentence is knowing its verification conditions. For example, knowing the meaning of “This is blue” is being able to pick out the object referred to by “this” and to check that it is blue. While this works only for simple sentences built from terms that directly pick out their referents and predicates with directly verifiable content, it can be extended to other sentences. To understand “No emerald is blue” one need only know the verification conditions for “This is an emerald” and “This is blue”, and the logical relations of such sentences to “No emerald is blue” (for example, that “No emerald is blue” implies “if this is an emerald, then this is not blue”, and so forth). Simple verification conditions plus some logical knowledge buys a lot. But it does not buy enough. For example, what are the verification conditions of “This is an electron”, where “this” does not pick out an ostendible object and “is an electron” does not have directly verifiable content?
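In standard first-order notation (ours, not the positivists’), with Ex for “x is an emerald”, Bx for “x is blue”, and a for the demonstrated object, the required logical knowledge is just the entailment

    \forall x\,(Ex \rightarrow \neg Bx) \;\models\; Ea \rightarrow \neg Ba

Someone who can directly verify the instance sentences, and who knows such entailments, thereby knows what would verify or falsify the general sentence.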

 

To deal with this, the positivists, especially Carnap, hit upon an ingenious program. First, they distinguished two kinds of linguistic terms: observational terms (O-terms), like “is blue”, which have relatively unproblematic, directly verifiable content, and theoretical terms (T-terms), like “is an electron”, which have more problematic content that is not directly verifiable. Second, they proposed to indirectly interpret the T-terms, using logical techniques inherited from Frege and Russell, by deductively connecting them within a theory to the directly interpreted O-terms. If each T-term could be explicitly defined using only O-terms, just as “x is a bachelor” can be defined as “x is an unmarried male human”, then one would understand the verification conditions for a T-term just by understanding the directly verifiable content of the O-terms used to define it, and a theory’s theoretical content would be just its observational content.
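Schematically, the hoped-for explicit definitions would have the following form (the bachelor clause is the standard example; the electron clause is a deliberately toy illustration of the pattern, with hypothetical observational predicates O_1, …, O_n, since no such definition was ever actually available):

    \mathrm{Bachelor}(x) \leftrightarrow \mathrm{Unmarried}(x) \wedge \mathrm{Male}(x) \wedge \mathrm{Human}(x)
    \mathrm{Electron}(x) \leftrightarrow O_{1}(x) \wedge O_{2}(x) \wedge \dots \wedge O_{n}(x)

Given such biconditionals, any sentence containing T-terms could be translated, clause by clause, into one containing only O-terms.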

 

Unfortunately, the content of “is an electron” is open-ended and outstrips observational content, so no explicit definition of it in terms of a finite list of O-terms can be given in first-order logic. From the 1930s to the 1950s, Carnap (1936, 1937, 1939, 1950, 1956) struggled with this problem, using ever more elaborate logical techniques. He eventually settled for a less ambitious account: the meaning of a T-term is given by the logical role it plays in a theory (Carnap 1939). Although T-terms cannot be explicitly defined in first-order logic, the totality of their logical connections within the theory to other T-terms and O-terms specifies their meaning. Intuitively, the meaning of a theoretical term like “electron” is specified by: “electron” means “the thing x that plays the Θ-role”, where Θ is the theory of electrons. (This idea can be rendered precise in second-order logic by a “Ramseyified” definition: “electron” means “the thing x such that Θ(x)”, where “Θ(x)” is the result of taking the theory of electrons Θ (understood as the conjunction of a set of sentences) and replacing all occurrences of “is an electron” with the (second-order) variable “x” (Lewis 1970).)
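Spelled out a little further (the notation loosely follows Lewis 1970 and is ours): write the electron theory with its T-term displayed as Θ(E). Then

    \text{Ramsey sentence:}\quad \exists X\, \Theta(X)
    \text{Definition:}\quad E =_{\mathrm{df}} \text{the unique } X \text{ such that } \Theta(X)

The Ramsey sentence preserves the theory’s observational consequences while containing no T-terms; the definition then fixes the meaning of “electron” as whatever uniquely plays the Θ-role, if anything does.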

 

Two features of this theory of meaning lay the groundwork for later discussions. First, the meaning of any T-term is theory-relative, since it is determined by the term’s deductive connections within a theory. Second, the positivists distinguished analytic truths (sentences true in virtue of meaning) from synthetic truths (sentences true in virtue of fact). “All bachelors are unmarried” and “All electrons have the property of being the x such that Θ(x)” are analytic truths, whereas “Kant was a bachelor” and “Electrons exist” are synthetic truths. The positivists inherited this distinction from Kant, but, unlike Kant, they rejected synthetic a priori truths. For them, there are only analytic a priori truths (all pure mathematics, for example) and synthetic a posteriori truths (all statements to the effect that a given claim is verified).

 

c. The Positivism Part of Logical Positivism

The positivists distinguished legitimate positive science, whose aim is to organize and predict observable phenomena, from illegitimate metaphysics, whose aim is to causally explain those phenomena in terms of underlying unobservable processes. We should restrict scientific attention to the phenomena we can know and banish unintelligible speculation about what lies behind the veil of appearances. This distinction rests on the observational-theoretical distinction (§3b): scientific sentences (even theoretical ones like “Electrons exist”) have meaningful verifiable content; sentences of metaphysics (like “God exists”) have no verifiable content and are meaningless.

 

Because of their hostility to metaphysics, the positivists “diluted” various concepts that have a metaphysical ring. For example, they replaced explanations in terms of causal powers with explanations in terms of law-like regularities so that “causal” explanations become arguments. According to the deductive-nomological (DN) model of explanation, pioneered by Hempel (1965), “Event b occurred because event a occurred” is elliptical for an argument like: “a is an event of kind A, b is an event of kind B, and if any A-event occurs, a B-event will occur; a occurred; therefore b occurred”. The explanandum logically follows from the explanantia, one of which is a law-like regularity.
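In schematic form (the standard textbook rendering of Hempel’s schema), a DN explanation is a deduction of the explanandum from general laws together with particular conditions:

    C_{1}, \dots, C_{k} \quad \text{(particular antecedent conditions)}
    L_{1}, \dots, L_{n} \quad \text{(general laws)}
    \therefore\; E \quad \text{(explanandum)}

In the example above, the law L is “whenever an A-event occurs, a B-event occurs”, the condition C is “a is an A-event and a occurred”, and E is “b occurred”.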

 

Because they advocated a non-literal interpretation of theories, the positivists are considered to be antirealists. Nevertheless, they do not deny the existence or reality of electrons: for them, to say that electrons exist or are real is merely to say that the concept electron stands in a definite logical relationship to observable conditions in a structured system of representations. What they deny is a certain metaphysical interpretation of such claims—that electrons exist underlying and causing but completely transcending our experience. It is not that physical objects are fictions; rather, all there is to being a real physical object is its empirical reality—its system of relations to verifiable experience.

 
