Knowledge and true belief are both things we want to have, but, all else being equal, we tend to prefer knowledge over mere true belief. The Primary Value Problem is the problem of explaining why
that should be the case. Many epistemologists think that we should
take it as a criterion of adequacy for theories of knowledge that
they be able to explain the fact that we prefer knowledge to mere
true belief, or at least that they be consistent with a good
explanation of why that should be the case.
To
illustrate: suppose that Steve believes that the Yankees are a good
baseball team, because he thinks that their pinstriped uniforms are
so sharp-looking. Steve’s belief is true—the Yankees always field
a good team—but he holds his belief for such a terrible reason that
we are very reluctant to think of it as an item of knowledge.
Cases
like Steve’s motivate the view that knowledge consists of more than
just true belief. In order to count as knowledge, a belief has to be
well justified in some suitable sense, and it should also meet a
suitable Gettier-avoidance condition (see Gettier
Problems).
But not only do beliefs like Steve’s motivate the view that
knowledge consists of
more than mere true belief: they also motivate the view that
knowledge is better to
have than mere true belief. For suppose that Yolanda knows the Yankees’
stats, and on that basis she believes that the Yankees are a good
team. It seems that Yolanda’s belief counts as an item of
knowledge. And if we compare Steve and Yolanda, it seems that Yolanda
is doing better than
Steve; we’d prefer to be in Yolanda’s epistemic position rather
than in Steve’s. This seems to indicate that we prefer knowledge
over mere true belief.
The
challenge of the Primary Value Problem is to explain why that should
be the case. Why should we care about whether we have knowledge
instead of mere true belief? After all, as is often pointed out, true
beliefs seem to bring us the very same practical benefits as
knowledge. (Steve would do just as well as Yolanda betting on the
Yankees, for example.) Socrates makes this point in the Meno,
arguing that if someone wants to get to Larisa, and he has a true
belief but not knowledge about which road to take, then he will get
to Larisa just as surely as if he had knowledge of which road to
take. In response to Socrates’s argument, Meno is moved to wonder
why anyone should care about having knowledge instead of mere true
belief. (Hence, the Primary Value Problem is sometimes called the
Meno Problem.)
So
in short, the problem is that mere true beliefs seem to be just as
likely as knowledge to guide us well in our actions. But we still
seem to have the persistent intuition that any given item of
knowledge is more valuable than the corresponding item of mere true
belief. The challenge is to explain this intuition. Strategies for
addressing this problem can either try to show that knowledge really
is always more valuable than corresponding items of mere true belief,
or else they can allow that knowledge is sometimes (or even always)
no more valuable than mere true belief. If we adopt the latter kind
of response to the problem, it is incumbent on us to explain why we
should have the intuition that knowledge is more valuable than mere
true belief, in cases where it turns out that knowledge isn’t in
fact more valuable. Following Pritchard (2008; 2009), we can call
strategies of the first kind vindicating,
and we can call strategies of the second kind revisionary.
There
isn’t a received view among epistemologists about how we ought to
respond to the Primary Value Problem, so the most useful thing to do
at this point is to consider a number of the more interesting
proposals from the literature, and to look at their problems and
prospects.
i. Knowledge as Mere True Belief
A
very straightforward way to respond to the problem is to deny one of
the intuitions on which the problem depends, the intuition that
knowledge is distinct from true belief. Meno toys with this
idea in the Meno,
though Socrates disabuses him of it. (Somewhat more recently,
Sartwell (1991; 1992) has defended this approach to knowledge.) If
knowledge is identical with true belief, then we can simply reject
the value problem as resting on a mistaken view of knowledge. If
knowledge is true belief, then there’s no discrepancy in value to
explain.
The
view that knowledge is just true belief is almost universally
rejected, however, and with good reason. Cases where subjects have
true beliefs but lack knowledge are so easy to construct and so
intuitively obvious that identifying knowledge with true belief
represents an extreme departure from how most epistemologists and
laypeople think of knowledge. Consider once again Steve’s belief
that the Yankees are a good baseball team, which he holds because he
thinks their pinstriped uniforms are so sharp. It seems like an abuse
of language to call Steve’s belief an item of knowledge. At the
very least, we should be hesitant to accept such an extreme view
until we’ve exhausted all other theoretical options.
Of
course it could still be the case that knowledge is no more valuable
than mere true belief, even though knowledge is not identical with
true belief. But, as we’ve seen, there is a widespread and
resilient intuition that knowledge is more
valuable than mere true belief (recall, for instance, that we tend to
think that Yolanda’s epistemic state is better than
Steve’s). If knowledge were identical with true belief, then we
would have to take that intuition to be mistaken; but, since we can
see that knowledge is more than mere true belief, we can continue
looking for an acceptable account which would explain why knowledge
is more valuable than mere true belief.
ii. Stability
Most
attempts to explain why knowledge is more valuable than mere true
belief proceed by identifying some condition which must be added to
true belief in order to yield knowledge, and then explaining why that
further condition is valuable. Socrates’s own view, at least as
presented in the Meno,
is that knowledge is true opinion plus an account of why the opinion
is true (where the account of why it is true is itself already
present in the soul; it must only be recalled from
memory). So, Socrates proposes, a known true belief will be
more stable than
a mere true belief, because having an account of why a belief is true
helps to keep us from losing it. If you don’t have an account of
why a proposition is true, you might easily forget it, or abandon
your belief in it when you come across some reason for doubting it.
But if you do have an account of why a proposition is true, you
likely have a greater chance of remembering it, and if you come
across some reason for doubting it, you’ll have a reason available
to you for continuing to believe it.
A worry for this solution is that it seems entirely possible for a subject S to have evidentially unsupported beliefs, which do not count as knowledge, but to which S clings dogmatically, even in the face of good counterevidence. S’s beliefs in a case like this can be just as stable as many items of knowledge; indeed,
dogmatically held beliefs can even be more stable than knowledge. For
if you know that p,
then presumably your belief is a response to some sort of good reason
for believing that p.
But if your belief is a response to good reasons, then you’d likely
be inclined to revise your belief that p,
if you were to come across some good evidence for thinking that p is
false, or for thinking that you didn’t have any good reason for
believing that p in
the first place. On the other hand, if p is
something you cling to dogmatically (contrary evidence be damned),
then you’ll likely retain your belief that p even
when you get good reason for doubting it. So, even though having
stable true beliefs is no doubt a good thing, knowledge isn’t
always more stable than mere true belief, and an appeal to stability
does not seem to give us an adequate explanation of the extra value
of knowledge over mere true belief.
One
way to defend the stability response to the value problem is to hold
that knowledge is more stable than mere true belief, but only for
people whose cognitive faculties are in good working order, and to
deny that the cognitive faculties of people who cling dogmatically to
evidentially unsupported beliefs are in good working order
(Williamson 2000). This solution invites the objection, however, that
our cognitive faculties are not all geared to the production of true
beliefs. Some cognitive faculties are geared towards ensuring our
survival, and the outputs of these latter faculties might be held
very firmly even if they are not well supported by evidence. For
example, there could be subjects with cognitive mechanisms which take
as input sudden sounds and generate as output the belief that there’s
a predator nearby. Mechanisms like these might very well generate a
strong conviction that there’s a predator nearby. Such mechanisms
would likely yield many more false positive predator-identifications
than they would yield correct identifications, but their poor
true-to-false output-ratio doesn’t prevent mechanisms of this kind
from having a very high survival value, as long as they do correctly
identify predators when they are present. So it’s not really clear
that knowledge is more stable than mere true beliefs, even for mere
true beliefs which have been produced by cognitive systems which are
in good working order, because it’s possible for beliefs to be
evidentially unsupported, and very stable, and produced by properly
functioning cognitive faculties, all at the same time. (See Kvanvig
2003, ch. 1, for a critical discussion of Williamson’s appeal to
stability.)
iii. Virtues
Virtue
epistemologists are, roughly, those who think that knowledge is true
belief which is the product of intellectual virtues. They seem to
have a plausible solution to the Primary (and, as we’ll see, to the
Secondary) Value Problem.
According
to a prominent strand of virtue epistemology, knowledge is true
belief for which we give the subject credit (Greco 2003), or true
belief which is a cognitive success because of the subject’s
exercise of her relevant cognitive ability (Greco 2008; Sosa 2007).
For example (to adapt Sosa’s analogy): an archer, in firing at a
target, might shoot well or poorly. If she shoots poorly but hits the
target anyway (say, she takes aim very poorly but sneezes at the
moment of firing, and luckily happens to hit the target), her shot
doesn’t display skill, and her hitting the target doesn’t reflect
well on her. If she shoots well, on the other hand, then she might
hit the target or miss the target. If she shoots well and misses the
target, we will still credit her with having made a good shot,
because her shot manifests skill. If she shoots well and hits the
target, then we will credit her success to her having made a good
shot—unless there
were intervening factors which made it the case that the shot hit the
mark just as a matter of luck. For example: if a trickster moves the
target while the arrow is in mid-flight, but a sudden gust of wind
moves the arrow to the target’s new location, then in spite of the
fact that the archer makes a good shot, and she hits the target, she
doesn’t hit the target because she
made a good shot. She was just lucky, even though she was skillful.
But when strange factors don’t intervene, and the archer hits the
target because she made a good shot, we give her credit for having
hit the target, since we think that performances which succeed
because they are competent are the best kind of performances. And,
similarly, when it comes to belief-formation, we give people credit
for getting things right as a result of the exercise of their
intellectual virtues: we think it’s an achievement to get things
right as the result of one’s cognitive competence, and so we tend
to think that there’s a sense in which people who get things right because of their intellectual competence deserve credit for doing so.
According
to another strand of virtue epistemology (Zagzebski 2003), we shouldn’t think of knowledge as true belief which meets some further condition.
Rather, we should think of knowledge as a state which a subject can
be in, which involves having the propositional attitude of belief,
but which also includes the motivations for which the subject has the
belief. Virtuous motivations might include things like diligence,
integrity, and a love of truth. And, just as we think that, in
ethics, virtuous motives make actions better (saving a drowning child
because you don’t want children to suffer and die is better than
saving a drowning child because you don’t want to have to give
testimony to the police, for example), we should also think that the
state of believing because of a virtuous motive is better than
believing for some other reason.
Some
concerns have been raised for both strands of virtue epistemology,
however. Briefly, a worry for the Sosa/Greco type of virtue
epistemology is that (as we’ll see in section 3) knowledge might not, in general, be an achievement after all; it might be something we can come by in a relatively easy or even lazy fashion. A worry for
Zagzebski’s type of virtue epistemology is that there seem to be
possible cases where subjects can acquire knowledge even though they
lack virtuous intellectual motives. Indeed, it seems possible to
acquire knowledge even if one has only the darkest of motives: if a
torturer is motivated by the desire to waterboard people until they
go insane, for example, he can thereby gain knowledge of how long it
takes to break a person by waterboarding.
Still,
the idea that knowledge can be analyzed as true belief which is
somehow virtuously produced and creditable to the agent seems to be
worth pursuing. Because the virtue-approach seems to be able to
handle most of the Gettier-style problems which plague previous
analyses of knowledge, and because it can provide what is on the face
of it a plausible solution to the Primary Value Problem, virtue
epistemology represents a promising research program, and its
problems and prospects deserve careful exploration.
iv. Reliabilism
The
Primary Value Problem is sometimes thought to be especially bad for
reliabilists about knowledge. Reliabilism in its simplest form is the
view that beliefs are justified if and only if they’re produced by
reliable processes, and they count as knowledge if and only if
they’re produced by reliable processes and they’re not Gettiered.
(See, for example, Goldman and Olsson 2009, p. 29.) The apparent
trouble for reliabilism is that reliability only seems to be valuable
as a means to truth—so, in any given case where we have a true
belief, it’s not clear that the reliability of the process which
produced the belief is able to add anything to the value that the
belief already has in virtue of being true. The value which true
beliefs have in virtue of being true completely “swamps” the
value of the reliability of their source, if reliability is only
valuable as a means to truth. (Hence the Primary Value Problem for
reliabilism has often been called the “swamping problem.”)
To
illustrate with an example borrowed from Zagzebski (2003): the value
of a cup of coffee seems to be a matter of how good the coffee
tastes. And we value reliable coffeemakers because we value good cups
of coffee. But when it comes to the value of any particular cup of
coffee, its value is just a matter of how good it tastes; whether the
coffee was produced by a reliable coffeemaker doesn’t add to or
detract from the value of the cup of coffee. Similarly, we value true
beliefs, and we value reliable belief-forming processes because we
care about getting true beliefs. So we have reason to prefer reliable
processes over unreliable ones. But whether a particular belief was
reliably or unreliably produced doesn’t seem to add to or detract
from the value of the belief itself.
Responses
have been offered on behalf of reliabilism. Brogaard (2006) points
out that critics of reliabilism seem to have been presupposing a
Moorean conception of value, according to which the value of an
object (or state, condition, and so forth) is entirely a function of
the internal properties of the object. (The value of the cup of
coffee is determined entirely by its internal properties, not by the
reliability of its production, or by the fineness of a particular
morning when you enjoy your coffee.) But this is a mistaken view
about value in general. External features can add value to objects.
We value a genuine Picasso painting more than a flawless counterfeit,
for example. If that’s correct, then extra value can be conferred
on an object, if it has a valuable source, and perhaps the value of
reliable processes can transfer to the beliefs which they produce.
(That
is a negative response to the value problem for reliabilism, in the
sense that its aim is to show that critics of reliabilism haven’t
shown that reliabilists can’t account for the value of knowledge.)
Goldman
and Olsson (2009) offer two further responses on behalf of
reliabilism. Their first response is that we can hold that true
belief is always valuable, and that reliability is only valuable as a
means to true belief, but that it is still more valuable to have
knowledge (understood as reliabilists understand knowledge, that is,
as reliably produced and unGettiered true belief) than a mere true
belief. For if S knows that p in
circumstances C, then S has formed the belief that p through
some reliable process in C. So S has some reliable process available
to her, and it generated a belief in C. This makes it more likely
that S will have a reliable process available to her in future
similar circumstances than it would be if S had an unreliably
produced true belief in C. So, when we’re thinking about how
valuable it is to be in circumstances C, it seems to be better for S
to be in C if S has knowledge in C than if she has mere true belief
in C, because having knowledge in C makes it likelier that she’ll
get more true beliefs in future similar circumstances.
This
response, Goldman and Olsson think, accounts for the extra value
which knowledge has in many cases. But there will still be cases
where S’s having knowledge in C doesn’t make it likelier that
she’ll get more true beliefs in the future. For example, C might be
a unique set of circumstances which is unlikely to come up again. Or
S might be employing a reliable process which is available to her in
C, but which is likely to become unavailable to her very soon. Or S
might be on her deathbed. So this response isn’t a completely
vindicating solution to the value problem, and it’s incumbent on
Goldman and Olsson to explain why we should tend to think that
knowledge is more valuable than mere true belief in those cases when
it’s not.
So
Goldman and Olsson offer a second response to the Primary Value
Problem: when it comes to our intuitions about the value of
knowledge, they argue, it’s plausible that these intuitions began
long ago with the recognition that true belief is always valuable in
some sense to have, and that knowledge is usually valuable because it
involves both true belief and an increased probability of getting more true
beliefs; and then, over time, we have come to simply think that
knowledge is valuable, even in cases when having knowledge doesn’t
make it more probable that the subject will get more true beliefs in
the future.
v. Contingent Features of Knowledge
An
approach similar to Goldman and Olsson’s is to consider the values
of contingent features of knowledge, rather than the value of its
necessary and/or sufficient conditions. Although we might think that
the natural way to account for the value of some state or condition
S1, which is composed of other states or conditions S2-Sn, is in
terms of the values of S2-Sn, perhaps S1 can be valuable in virtue of
some other conditions which typically (but not always) accompany S1,
or in terms of some valuable result which S1 is typically (but not
always) able to get us. For example: it’s normal to think that air
travel is valuable, because it typically enables people to cover
great distances safely and quickly. Of course, sometimes airplanes are diverted, slowing travellers down, and sometimes airplanes crash. But even so, we might continue to think, air travel is
typically a valuable thing, because in ordinary cases, it gets us
something good.
Similarly,
we might think that knowledge is valuable because we need to rely on
the information which people give us in order to accomplish just
about anything in this life, and being able to identify people as
having knowledge means being able to rely on them as informants. And
we also might think that there’s value in being able to track
whether our own beliefs are held on the basis of good reasons, and we
typically have good reasons available to us for believing p when
we know that p.
Of course we aren’t always in a position to identify when other
people have knowledge, and if externalists about knowledge are right,
then we don’t always have good reasons available to us when we have
knowledge ourselves. Nevertheless, we can typically identify
people as knowers, and we can typically identify
good reasons for the things we know. These things are valuable, so
they make typical cases of knowledge valuable, too. (See Craig (1990)
for an account of the value of knowledge in terms of the
characteristic function of knowledge-attribution. Jones (1997)
further develops the view.)
Like
Goldman and Olsson’s responses, this strategy for responding to the
value problem doesn’t give us an account of why knowledge is always
more valuable than mere true belief. For those who think that
knowledge is always preferable to mere true belief, and who therefore
seek a vindicating solution to the Primary Value Problem, this strategy will not be satisfactory. But for those who are willing to accept a somewhat revisionary response, according to which knowledge
is only usually or characteristically preferable to mere true belief,
this strategy seems promising.
b. The Secondary Value Problem
Suppose
you’ve applied for a new position in your company, but your boss
tells you that your co-worker Jones is going to get the job.
Frustrated, you glance over at Jones, and see that he has ten coins
on his desk, and you then watch him put the coins in his pocket. So
you form the belief that the person who will get the job has at least
ten coins in his or her pocket (call this belief “B”). But it
turns out that your boss was just toying with you; he just wanted to
see how you would react to bad news. He’s going to give you the
job. And it turns out that you also have at least ten coins in your
pocket.
So
you have a justified true belief, B, which has been Gettiered. In
cases like this, once you’ve found out that you were Gettiered,
it’s natural to react with annoyance or intellectual embarrassment:
even though you got things right (about the coins, though not about
who would get the job), and even though you had good reason to think
you had things right, you were just lucky in getting things right.
If
this is correct—if we do tend to prefer to have knowledge over
Gettiered justified true beliefs—then this suggests that there’s
a second value problem to be addressed. We seem to prefer having
knowledge over having any proper
subset of the parts of knowledge. But why should that be the case?
What value is added to justified true beliefs, when they meet a
suitable anti-Gettier condition?
i. No Extra Value
An
initial response is to deny that knowledge is more valuable than mere
justified true belief. If we’ve got true beliefs, and good reasons for them, we might of course still be Gettiered; it might turn out that we’re just lucky in having true beliefs. When we
inquire into whether p,
we want to get to the truth regarding p,
and we want to do so in a rationally defensible way. If it turns out
that we get to the truth in a rationally defensible way, but strange
factors of the case undermine our claim to knowing the truth about p,
perhaps it just doesn’t matter that we don’t have knowledge.
Few
epistemologists have defended this view, however (though Kaplan
(1985) is an exception). We do after all find it irritating when we
find out that we’ve been Gettiered; and when we are considering
corresponding cases of knowledge and of Gettiered justified true
belief, we tend to think that the subject who has knowledge is better
off than the subject who is Gettiered. Of course we might be
mistaken; there might be nothing better in knowledge than in mere
justified true belief. But the presumption seems to be that knowledge
is more valuable, and we should try to explain why that is so.
Skepticism about the extra value of knowledge over mere justified
true belief might be acceptable if we fail to find an adequate
explanation, but we shouldn’t accept skepticism before searching
for a good explanation.
ii. Virtues
We
saw above that some virtue epistemologists think of knowledge in
terms of the achievement of true beliefs as a result of the exercise
of cognitive skills or virtues. And we do generally seem to value
success that results from our efforts and skills (that is, we value
success that’s been achieved rather
than stumbled into (for example, Sosa (2003; 2007) and Pritchard
(2009)). So, because we have a cognitive aim of getting to the truth,
and we can achieve that aim either as a result of luck or as a result
of our skillful cognitive performance, it seems that the value of
achieving our aims as a result of a skillful performance can help
explain why knowledge is more valuable than mere true belief.
That
line of thought works just as well as a response to the Secondary
Value Problem as to the Primary Value Problem. For in a Gettier case,
the subject has a justified true belief, but it’s just as a result
of luck that she arrived at a true belief rather than a false one. By
contrast, when a subject arrives at a true belief because she has
exercised a cognitive virtue, it’s plausible to think that it’s
not just lucky that she’s arrived at a true belief; she gets credit
for succeeding in the aim of getting to the truth as a result of her
skillful performance. So cases of knowledge do, but Gettier cases do
not, exemplify the value of succeeding in achieving our aims as a
result of a skillful performance.