Identity theory is a family of views on the
relationship between mind and body. Type Identity theories hold that at least
some types (or kinds, or classes) of mental states are, as a matter of
contingent fact, literally identical with some types (or kinds, or classes) of
brain states. The earliest advocates of Type Identity—U.T. Place, Herbert
Feigl, and J.J.C. Smart—each proposed their own version of the
theory in the late 1950s to early 60s. But it was not until David Armstrong
made the radical claim that all mental states (including intentional ones) are
identical with physical states, that philosophers of mind divided themselves into
camps over the issue.
Over the years, numerous objections have been levied
against Type Identity, ranging from epistemological complaints to charges of
Leibniz’s Law violations to Hilary Putnam’s famous pronouncement that mental
states are in fact capable of being “multiply realized.” Defenders of Type
Identity have come up with two basic strategies in response to Putnam’s claim:
they restrict type identity claims to particular species or structures, or else
they extend such claims to allow for the possibility of disjunctive physical
kinds. To this day, debate concerning the validity of these strategies—and the
truth of Mind-Brain Type Identity—rages in the philosophical literature.
1. Early Versions of the Theory
Place accepted the Logical Behaviorists’ dispositional
analysis of cognitive and volitional concepts. With respect to those mental
concepts “clustering around the notions of consciousness, experience,
sensation, and mental imagery,” however, he held that no behavioristic account
(even in terms of unfulfilled dispositions to behave) would suffice. Seeking an
alternative to the classic dualist position, according to which mental states
possess an ontology distinct from the physiological states with which they are
thought to be correlated, Place claimed that sensations and the like might very
well be processes in the brain—despite the fact that statements about the
former cannot be logically analyzed into statements about the latter. Drawing
an analogy with such scientifically verifiable (and obviously contingent)
statements as “Lightning is a motion of electric charges,” Place cited
potential explanatory power as the reason for hypothesizing consciousness-brain
state relations in terms of identity rather than mere correlation. This still
left the problem of explaining introspective reports in terms of brain
processes, since these reports (for example, of a green after-image) typically
make reference to entities which do not fit with the physicalist picture (there
is nothing green in the brain, for example). To solve this problem, Place
called attention to the “phenomenological fallacy”—the mistaken assumption that
one’s introspective observations report “the actual state of affairs in some
mysterious internal environment.” All that the Mind-Brain Identity theorist
need do to adequately explain a subject’s introspective observation, according
to Place, is show that the brain process causing the subject to describe his
experience in this particular way is the kind of process which normally occurs
when there is actually something in the environment corresponding to his
description.
At least in the beginning, J.J.C. Smart followed U.T.
Place in applying the Identity Theory only to those mental concepts considered
resistant to behaviorist treatment, notably sensations. Because of the proposed
identification of sensations with states of the central nervous system, this limited
version of Mind-Brain Type Identity also became known as Central-State
Materialism. Smart’s main concern was the analysis of sensation-reports (e.g.
“I see a green after-image”) into what he described, following Gilbert Ryle, as
“topic-neutral” language (roughly, “There is something going on which is like
what is going on when I have my eyes open, am awake, and there is something
green illuminated in front of me”). Where Smart diverged from Place was in the
explanation he gave for adopting the thesis that sensations are processes in
the brain. According to Smart (1959), “there is no conceivable experiment which
could decide between materialism and epiphenomenalism” (where the latter is
understood as a species of dualism); the statement “sensations are brain
processes,” therefore, is not a straight-out scientific hypothesis, but should
be adopted on other grounds. Occam’s razor is cited in support of the claim
that, even if the brain-process theory and dualism are equally consistent with
the (empirical) facts, the former has an edge in virtue of its simplicity and
explanatory utility.
Occam’s razor also plays a role in the version of
Mind-Brain Type Identity developed by Feigl (in fact, Smart claimed to have
been influenced by Feigl as well as by Place). On the epiphenomenalist picture,
in addition to the normal physical laws of cause and effect there are
psychophysical laws positing mental effects which do not by themselves function
as causes for any observable behavior. In Feigl’s view, such “nomological
danglers” have no place in a respectable ontology; thus, epiphenomenalism
(again considered as a species of dualism) should be rejected in favor of an
alternative, monistic theory of mind-body relations. Feigl’s suggestion was to
interpret the empirically ascertainable correlations between phenomenal
experiences (“raw feels,” see Consciousness and Qualia) and neurophysiological
processes in terms of contingent identity: although the terms we use to
identify them have different senses, their referents are one and the
same—namely, the immediately experienced qualities themselves. Besides
eliminating dangling causal laws, Feigl’s picture is intended to simplify our
conception of the world: “instead of conceiving of two realms, we have only one
reality which is represented in two different conceptual systems.”
In a number of early papers, and then at length in his
1968 book, A Materialist Theory of the Mind, Armstrong worked out a version of
Mind-Brain Type Identity which starts from a somewhat different place than the
others. Adopting straight away the scientific view that humans are nothing more
than physico-chemical mechanisms, he declared that the task for philosophy is
to work out an account of the mind which is compatible with this view. Already
the seeds were sown for an Identity Theory which covers all of our mental
concepts, not merely those which fit but awkwardly on the Behaviorist picture.
Armstrong actually gave credit to the Behaviorists for logically connecting
internal mental states with external behavior; where they went wrong, he
argued, was in identifying the two realms. His own suggestion was that it makes
a lot more sense to define the mental not as behavior, but rather as the inner
causes of behavior. Thus, “we reach the conception of a mental state as a state
of the person apt for producing certain ranges of behavior.” Armstrong’s answer
to the remaining empirical question—what in fact is the intrinsic nature of
these (mental) causes?—was that they are physical states of the central nervous
system. The fact that Smart himself now holds that all mental states are brain
states (of course, the reverse need not be true) testifies to the influence of
Armstrong’s theory.
Besides the so-called “translation” versions of
Mind-Brain Type Identity advanced by Place, Smart, and Armstrong, according to
which our mental concepts are first supposed to be translated into
topic-neutral language, and the related version put forward by Feigl, there are
also “disappearance” (or “replacement”) versions. As initially outlined by Paul
Feyerabend (1963), this kind of Identity Theory actually favors doing away with
our present mental concepts. The primary motivation for such a radical proposal
is as follows: logically representing the identity relation between mental states
and physical states by means of biconditional “bridge laws” (e.g., something is
a pain if and only if it’s a c-fiber excitation) not only implies that mental
states have physical features; “it also seems to imply (if read from the right
to the left) that some physical events…have non-physical features.” In order to
avoid this apparent dualism of properties, Feyerabend stressed the
incompatibility of our mental concepts with empirical discoveries (including
projected ones), and proposed a redefinition of our existent mental terms.
Different philosophers took this proposal to imply different things. Some
advocated a wholesale scrapping of our ordinary language descriptions of mental
states, such that, down the road, people might develop a whole new (and vastly
more accurate) vocabulary to describe their own and others’ states of mind.
This raises the question, of course, of what such a new-and-improved vocabulary
would look like. Others took a more theoretical/conservative line, arguing that
our familiar ways of describing mental states could in principle be replaced by
some very different (and again, vastly more accurate) set of terms and
concepts, but that these new terms and concepts would not—at least not
necessarily—be expected to become part of ordinary language. Responding to
Feyerabend, a number of philosophers expressed concern about the
appropriateness of classifying disappearance versions as theories of Mind-Brain
Type Identity. But Richard Rorty (1965) answered this concern, arguing that
there is nothing wrong with claiming that “what people now call ‘sensations’
are (identical with) certain brain processes.” In his Postscript to “The
‘Mental’ and the ‘Physical’,” Feigl (1967) confessed an attraction to this
version of the Identity Theory, and over the years Smart has moved in the same
direction.
2. Traditional Objections
A number of objections to Mind-Brain Type Identity,
some a great deal stronger than others, began circulating soon after the
publication of Smart’s 1959 article. Perhaps the weakest were those of the
epistemological variety. It has been claimed, for example, that because people
have had (and still do have) knowledge of specific mental states while
remaining ignorant as to the physical states with which they are correlated,
the former could not possibly be identical with the latter. The obvious
response to this type of objection is to call attention to the contingent
nature of the proposed identities—of course we have different conceptions of
mental states and their correlated brain states, or no conception of the latter
at all, but that is just because (as Feigl made perfectly clear) the terms
we use to describe them have different meanings. The contingency of mind-brain
identity relations also serves to answer the objection that since presently
accepted correlations may very well be empirically invalidated in the future,
mental states and brain states should not be viewed as identical.
A more serious objection to Mind-Brain Type Identity,
one that to this day has not been satisfactorily resolved, concerns various
non-intensional properties of mental states (on the one hand), and physical
states (on the other). After-images, for example, may be green or purple in
color, but nobody could reasonably claim that states of the brain are green or
purple. And conversely, while brain states may be spatially located with a fair
degree of accuracy, it has traditionally been assumed that mental states are
non-spatial. The problem generated by examples such as these is that they
appear to constitute violations of Leibniz’s Law, which states that if A is
identical with B, then A and B must be indiscernible in the sense of having in
common all of their (non-intensional) properties. We have already seen how
Place chose to respond to this type of objection, at least insofar as it
concerns conscious experiences—that is, by invoking the so-called
“phenomenological fallacy.” Smart’s response was to reiterate the point that
mental terms and physical terms have different meanings, while adding the
somewhat ambiguous remark that neither do they have the same logic. Lastly,
Smart claimed that if his hypothesis about sensations being brain processes
turns out to be correct, “we may easily adopt a convention…whereby it would
make sense to talk of an experience in terms appropriate to physical processes”
(the similarity to Feyerabend’s disappearance version of Mind-Brain Type
Identity should be apparent here). As for apparent discrepancies going in the
other direction (e.g., the spatiality of brain states vs. the non-spatiality of
mental states), Thomas Nagel in 1965 proposed a means of sidestepping any
objections by redefining the candidates for identity: “if the two sides of the
identity are not a sensation and a brain process but my having a certain
sensation or thought and my body’s being in a certain physical state, then they
will both be going on in the same place—namely, wherever I (and my body) happen
to be.” Suffice it to say, opponents of Mind-Brain Type Identity found Nagel’s
suggestion unappealing.
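The logical structure of this family of objections is worth making explicit. In schematic form (an illustrative rendering only, not notation used by the authors discussed here), Leibniz’s Law is the principle of the indiscernibility of identicals:

\[
a = b \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fb)
\]

so that if an after-image has some (non-intensional) property—being green, say—which no brain process has, the after-image and the brain process cannot be identical.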
The last traditional objection we shall look at
concerns the phenomenon of “first-person authority”; that is, the apparent
incorrigibility of introspective reports of thoughts and sensations. If I
report the occurrence of a pain in my leg, then (the story goes) I must have a
pain in my leg. Since the same cannot be said for reports of brain processes,
which are always open to question, it might look like we have here another
violation of Leibniz’s Law. But the real import of this discrepancy concerns
the purported correlations between mental states and brain states. What are we
to make of cases in which the report of a brain scientist contradicts the
introspective report, say, of someone claiming to be in pain? Is the brain
scientist always wrong? Smart’s initial response to Kurt Baier, who asked this
question in a 1962 article, was to deny the likelihood that such a state of
affairs would ever come about. But he also put forward another suggestion,
namely, that “not even sincere reports of immediate experience can be absolutely
incorrigible.” A lot of weight falls on the word “absolutely” here, for if the
incorrigibility of introspective reports is qualified too strongly, then, as
C.V. Borst noted in 1970, “it is somewhat difficult to see how the required
psycho-physical correlations could ever be set up at all.”
3. Type vs. Token Identity
Something here needs to be said about the difference
between Type Identity and Token Identity, as this difference gets manifested in
the ontological commitments implicit in various Mind-Brain Identity theses.
Nagel was one of the first to distinguish between “general” and “particular”
identities in the context of the mind-body problem; this distinction was picked
up by Charles Taylor, who wrote in 1967 that “the failure of [general]
correlations…would still allow us to look for particular identities, holding
not between, say, a yellow after-image and a certain type of brain process in
general, but between a particular occurrence of this yellow after-image and a
particular occurrence of a brain process.” In contemporary parlance: when
asking whether mental things are the same as physical things, or distinct from
them, one must be clear as to whether the question applies to concrete
particulars (e.g., individual instances of pain occurring in particular
subjects at particular times) or to the kind (of state or event) under which
such concrete particulars fall.
Token Identity theories hold that every concrete
particular falling under a mental kind can be identified with some physical
(perhaps neurophysiological) happening or other: instances of pain, for example,
are taken to be not only instances of a mental state (e.g., pain), but
instances of some physical state as well (say, c-fiber excitation). Token
Identity is weaker than Type Identity, which goes so far as to claim that
mental kinds themselves are physical kinds. As Jerry Fodor pointed out in 1974,
Token Identity is entailed by, but does not entail, Type Identity. The former
is entailed by the latter because if mental kinds themselves are physical
kinds, then each individual instance of a mental kind will also be an
individual instance of a physical kind. The former does not entail the latter,
however, because even if a concrete particular falls under both a mental kind
and a physical kind, this contingent fact “does not guarantee the identity of
the kinds whose instantiation constitutes the concrete particulars.”
So the Identity Theory, taken as a theory of types
rather than tokens, must make some claim to the effect that mental states such
as pain (and not just individual instances of pain) are contingently identical
with—and therefore theoretically reducible to—physical states such as c-fiber
excitation. Depending on the desired strength and scope of mind-brain identity,
however, there are various ways of refining this claim.
4. Multiple Realizability
In “The Nature of Mental States” (1967), Hilary Putnam
introduced what is widely considered the most damaging objection to theories of
Mind-Brain Type Identity—indeed, the objection which effectively retired such
theories from their privileged position in modern debates concerning the
relationship between mind and body.
Putnam’s argument can be paraphrased as follows: (1)
according to the Mind-Brain Type Identity theorist (at least post-Armstrong),
for every mental state there is a unique physical-chemical state of the brain
such that a life-form can be in that mental state if and only if it is in that
physical state. (2) It seems quite plausible to hold, as an empirical
hypothesis, that physically possible life-forms can be in the same mental state
without having brains in the same unique physical-chemical state. (3)
Therefore, it is highly unlikely that the Mind-Brain Type Identity theorist is
correct.
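The first premise can be rendered schematically (an illustrative gloss, not Putnam’s own formulation):

\[
\forall M\, \exists!\, P\, \forall x\, \big(x \text{ is in } M \leftrightarrow x \text{ is in } P\big)
\]

The second premise denies precisely the uniqueness this requires: two physically possible life-forms may share a mental state while being in different physical-chemical brain states.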
In support of the second premise above—the so-called
“multiple realizability” hypothesis—Putnam raised the following point: we have
good reason to suppose that somewhere in the universe—perhaps on earth, perhaps
only in scientific theory (or fiction)—there is a physically possible life-form
capable of being in mental state X (e.g., capable of feeling pain) without
being in physical-chemical brain state Y (that is, without being in the same
physical-chemical brain state correlated with pain in mammals). To follow just
one line of thought (advanced by Ned Block and Jerry Fodor in 1972), assuming
that the Darwinian doctrine of evolutionary convergence applies to psychology
as well as behavior, “psychological similarities across species may often
reflect convergent environmental selection rather than underlying physiological
similarities.” Other empirically verifiable phenomena, such as the plasticity
of the brain, also lend support to Putnam’s argument against Type Identity. It
is important to note, however, that Token Identity theories are fully
consistent with the multiple realizability of mental states.
5. Attempts at Salvaging Type Identity
Since the publication of Putnam’s paper, a number of
philosophers have tried to save Mind-Brain Type Identity from the philosophical
scrapheap by making it fit somehow with the claim that the same mental states
are capable of being realized in a wide variety of life-forms and physical
structures. Two strategies in particular warrant examination here.
In a 1969 review of “The Nature of Mental States,”
David Lewis attacked Putnam for directing his argument against a straw man. According
to Lewis, “a reasonable brain-state theorist would anticipate that pain might
well be one brain state in the case of men, and some other brain (or non-brain)
state in the case of mollusks. It might even be one brain state in the case of
Putnam, another in the case of Lewis.” But it is not so clear (in fact it is
doubtful) that Lewis’ appeal to “tacit relativity to context” will succeed in
rendering Type Identity compatible with the multiple realizability of mental
states. Although Putnam does not consider the possibility of species-specific
multiple realization resulting from such phenomena as injury compensation,
congenital defects, mutation, developmental plasticity, and, theoretically,
prosthetic brain surgery, neither does he say anything to rule them out. And
this is not surprising. As early as 1960, Identity theorists such as Stephen
Pepper were acknowledging the existence of species- (even system-)specific
multiple realizability due to emergencies, accidents, injuries, and the like:
“it is not…necessary that the [psychophysical] correlation should be restricted
to areas of strict localization. One area of the brain could take over the
function of another area of the brain that has been injured.” Admittedly, some
of the phenomena listed above tell against Lewis’ objection more than others;
nevertheless, prima facie there seems no good reason to deny the possibility of
species-specific multiple realization.
In a desperate attempt at invalidating the conclusion
of Putnam’s argument, the brain-state theorist can undoubtedly come up with
additional restrictions to impose upon the first premise, e.g., with respect to
time. This is the strategy of David Braddon-Mitchell and Frank Jackson, who
wrote in a 1996 book that “there is…a better way to respond to the multiple
realizability point [than to advocate token identity]. It is to retain a
type-type mind-brain identity theory, but allow that the identities
between mental types and brain types may—indeed, most likely will—need to be
restricted. Identity statements need to include an explicit temporal
restriction.” Mental states such as pain may not be identical with, say,
c-fiber excitation in humans (because of species-specific multiple
realization), but—the story goes—they could very well be identical with c-fiber
excitation in humans at time T. The danger in such an approach, besides its ad
hoc nature, is that the type-physicalist basis from which the Identity Theorist
begins slips into something closer to token physicalism (recall that concrete
particulars are individual instances occurring in particular subjects at
particular times). At the very least, Mind-Brain Type Identity will wind up so
weak as to be inadequate as an account of the nature of the mental.
Another popular strategy for preserving Type Identity
in the face of multiple realization is to allow for the existence of
disjunctive physical kinds. If types of physical states are defined in terms of
disjunctions of two or more physical “realizers,” then the correlation of any one
such realizer with a particular type of mental state suffices to sustain the
identity claim. The search for
species- or system-specific identities is thereby rendered unnecessary, as
mental states such as pain could eventually be identified with the (potentially
infinite) disjunctive physical state of, say, c-fiber excitation (in humans),
d-fiber excitation (in mollusks), and e-network state (in a robot). In “The
Nature of Mental States,” Putnam dismisses the disjunctive strategy out of
hand, without saying why he thinks the physical-chemical brain states to be
posited in identity claims must be uniquely specifiable. Fodor (in 1974) and
Jaegwon Kim (1992), both former students of Putnam, tried coming to his rescue
by producing independent arguments which purport to show that disjunctions of
physical realizers cannot themselves be kinds. Fodor concluded that
“reductionism… flies in the face of the facts,” whereas Kim concluded that
psychology is open to sundering “by being multiply locally reduced.”
Even if disjunctive physical kinds are allowed, it may
be argued that the strategy in question still cannot save Type Identity from
considerations of multiple realizability. Assume that all of the possible
physical realizers for some mental state M are represented by the ideal,
perhaps infinite, disjunctive physical state P; then it could never be the case
that a physically possible life-form is in M and not in P. Nevertheless, we
have good reason to think that some physically possible life-form could be in P
without being in M—maybe P in that life-form realizes some other mental state.
As Block and Fodor have argued, “it seems plausible that practically any type
of physical state could realize any type of psychological state in some
physical system or other.” The doctrine of “neurological equipotentiality”
advanced by renowned physiological psychologist Karl Lashley, according to
which given neural structures underlie a whole slew of psychological functions
depending upon the character of the activities engaged in, bears out this
hypothesis. The obvious way for the committed Identity theorist to deal with
this problem—by placing disjunctions of potentially infinite length on either
side of a biconditional sign—would render any so-called “identity” claim largely
uninformative: the more disjuncts, the less informative, with infinitely long
disjunctions yielding no information at all. The only
thing an Identity Theory of this kind could tell us is that at least one of the
mental disjuncts is capable of being realized by at least one of the physical
disjuncts. Physicalism would survive, but barely, and in a distinctly
non-reductive form.
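Put schematically (again only an illustrative rendering, with the disjunct labels merely hypothetical), the disjunctive identity claim has the form

\[
M \;\leftrightarrow\; P_1 \lor P_2 \lor \dots \lor P_n ,
\]

and the worry just rehearsed is that even if every realization of M falls under some disjunct, a life-form might be in one of the P_i without being in M, so the right-to-left direction fails; lengthening the disjunctions on both sides restores the biconditional only at the price of informativeness.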
Recently, however, Ronald Endicott has presented
compelling considerations which tell against the above argument. In that argument,
physical states are taken in isolation from their context, but it is only if the
context is allowed to vary that Block and Fodor’s remark comes out true. Otherwise,
mental states would not be determined by physical states, a situation which
contradicts the widely accepted (in contemporary philosophy of mind)
“supervenience principle”: no mental difference without a physical difference.
A defender of disjunctive physical kinds can thus claim that M is identical
with some ideal disjunction of complex physical properties like “C1 & P1,”
whose disjuncts are conjunctions of all the physical states (Ps) plus their
contexts (Cs) which give rise to M. So while “some physically possible
life-form could be in P without being in M,” no physically possible life-form
could be in C1 & P1 without being in M. Whether Endicott’s considerations
constitute a sufficient defense of the disjunctive strategy is still open to
debate. But one thing is clear—in the face of numerous and weighty objections,
Mind-Brain Type Identity (in one form or another) remains viable as a theory of
mind-body relations.