This is the latest in my email correspondence with Professor Torin Alter, who specializes in the philosophy of mind.
I'm not going to respond to all of the issues raised in our previous exchanges. I would, for example, like to discuss the issue of operational definitions and your discomfort with my a priori approach to physicalism. But I'll set those topics aside for a possible future exchange. For now, I've decided to construct a new argument about phenomenal knowledge which may help clear a path towards mutual understanding.
I checked out your paper, “Phenomenal Knowledge Without Experience,” as you suggested, and I would like to begin by quoting its opening sentences: “Phenomenal knowledge usually comes from experience. For example, I know what it’s like to see red because I have done so.”
The meaning of the phrase "phenomenal knowledge" is not obvious, which may be one reason why so many papers are written about it. One thing seems clear enough to me: An experience of red does not automatically translate to knowledge about that experience. (We can imagine countless scenarios in which we experience something and later alter our interpretation of that experience, and we can imagine countless scenarios in which our experiences never produce anything we would call "knowledge.") The question is, how do you get from experiencing red to knowing what it is like to see red? Mustn’t you first know that you have seen red?
If phenomenal knowledge (“P”) is about some experience (“X”), we must have some experiences (“Y”) which allow us to regard X as such. X is not sufficient for P, whether or not it is necessary for P. (From your paper, I take it that you are not convinced that X is necessary for P; however, this point does not seem relevant to my concern here.)
What are the conditions for Y here? In other words, what can ground P?
Y cannot be evidence of X, because then we would require some other experience (“Z”) to ground our knowledge of Y. Y must provide a ground for P in some other way, or else we would find ourselves with an infinite regress. Indeed, Y is not supposed to be a justification for ordinary knowledge; rather, Y is supposed to ground "phenomenal knowledge." It would seem the distinction between ordinary knowledge and phenomenal knowledge hinges on the nature of Y.
Consider the case of Mary again. When Mary leaves her black-and-white room, Mary's experiences of color vision do not automatically and immediately translate to phenomenal knowledge. She can see red without knowing that she has seen red. It might even take some time before she can clearly distinguish between various shades of different colors.
At what point does she say, “now I know what it is like to see red”? It must be some time after she has seen red, and after she has learned from some other experience that she has seen red. That is, she needs a Y-experience, and this must involve language. She needs to learn how to use the names of colors in a new way.
Mary could look at a book with colors and their names. Or she could be told the names of the colors. The point is, learning what it is like to see red amounts to learning how to use the word “red” to name various objects of experience. So, when she says, “now I know what it is like to see red,” she means, “now I can use the word 'red' to name objects of my experience.”
Of course, she has new experiences here. She experiences red directly, no longer limited by her monochromatic room. But this does not mean she has learned any facts about color vision.
To see this, imagine a different set of Y-experiences which ground Mary's phenomenal knowledge. Instead of looking up colors in a book or talking to people, Mary has a small device which reads wavelengths of light. (Mary could have been taught to use [or even build] this machine inside her black-and-white room.) Since Mary remembers everything she learned about color vision in her black-and-white room, she will remember how to translate the data from her machine into words, such as “red,” “green,” and “blue.” Thus, she can ground her phenomenal knowledge of color vision by associating her visual experience of color with the data from her machine.
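Mary's machine-mediated naming can be pictured as a simple lookup from measured wavelength to color word. The following sketch is my own toy illustration (the specific wavelength cutoffs are standard approximations of the visible spectrum, not anything specified in the thought experiment):

```python
# A toy model of Mary's wavelength-reading device: it translates a
# measured wavelength (in nanometers) into a color word, just as Mary
# translates her machine's data into "red," "green," and "blue."
# The ranges below are conventional approximations of the visible spectrum.
COLOR_RANGES = [
    ("violet", 380, 450),
    ("blue",   450, 495),
    ("green",  495, 570),
    ("yellow", 570, 590),
    ("orange", 590, 620),
    ("red",    620, 750),
]

def name_color(wavelength_nm):
    """Return the color word for a measured wavelength."""
    for name, low, high in COLOR_RANGES:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(name_color(680))  # a wavelength the device would report as "red"
```

The point of the sketch is that such a device lets Mary apply color words correctly without ever consulting her visual experience, which is exactly what the next paragraph exploits.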
Interestingly, while Mary could ground her phenomenal knowledge in this way, she could also use the machine without grounding any phenomenal knowledge at all. She could use the machine to identify specific colors, relying on her knowledge of wavelengths, but without bothering to notice any correspondence between the machine’s data and her visual experience of color. She could thus name colorful objects using the words “red” and “blue” as any normal person would (perhaps even with greater consistency), but only in response to the machine’s outputs. She does not have to use the machine to learn how to say such things as, “now I know what it is like to see red.”
We can imagine Mary never knowing what it is like to see red, even while accurately reporting instances of redness with her wavelength-detecting device, and even though she has color experiences which she would be justified in calling "red" if she were inclined to learn the rules of the language.
The point here is to emphasize that Mary’s visual experience of colors does not translate into phenomenal knowledge of color vision (if "phenomenal knowledge" is taken to be whatever leads a person to say, "now I know what it is like to see X"), nor does it improve her ability to understand color vision or make predictions about color vision.
Now, what happens if Mary does finally learn to associate her visual experiences with the names of colors? She will learn how to refer to colors without her machine. In this case, she will have learned a new skill: how to refer to the colors of objects without any technological prosthetics; but her knowledge of color vision will not have expanded. She will be able to say, “now I know what it is like to see red,” but that only means that now she does not need a prosthetic color detector to indicate redness. She can do it on her own.
I am tempted to conclude that phenomenal knowledge is the ability to name objects of experience in accordance with the rules of a language. And what Y-experience can ground this ability? It must be the experience of learning a language. More specifically, it is the experience of naming objects of experience.
In sum . . .
There is a parallel here between this scenario and the Lonesome Mary/Mary System scenario I described earlier, though I hadn't until now emphasized the importance of language in this thought experiment.
Everything Mary needs to gain phenomenal knowledge is available to her in the black-and-white room. When she gets out of the room, she gains phenomenal knowledge (whatever is implied by the claim that “I know what it is like to experience X”), but only when she learns how to name things on her own. She does not learn any new facts about what she is naming, nor does she learn any new names.
And we should not fall into the trap of saying that she does learn a new fact about color vision, like the fact that "seeing red is just like this," because what is expressed here is only the fact that Mary has learned to name something "red." That is not a fact about color vision, but a fact about her ability to use the language.
Mary cannot speak any more intelligently or knowledgeably about color vision than she had previously been able to do. She therefore does not improve her understanding of color vision, and there is no sense in which she gains propositional knowledge (information) about color vision.
There is no sense in which any information about color vision was not discursively learnable here. If anything is not discursively learnable in Mary’s situation, it is the mechanisms which make language-learning possible. Such mechanisms do not constitute information (propositional knowledge) about color vision, but are those aspects of an organism which make propositional knowledge possible.
Thus, the case of Mary offers no grounds for claiming that scientific (physicalistic) descriptions of color vision are bound to leave out any information which is known to people who have color vision. The most we can say is that such descriptions do not automatically provide a person with phenomenal knowledge, because to gain phenomenal knowledge a person must make the effort to learn a particular use of the language. Phenomenal knowledge of colors is not information (propositional knowledge) about color vision at all. It is the ability to use discursively learnable information about color vision to refer to one’s own color vision.
I'm not sure how much of this is new. I certainly hope you find it compelling. In any case, I look forward to your response.
Philosophy, Film, Politics, Etc.
Tuesday, March 17, 2009
Monday, March 9, 2009
The following is a slightly improved version of a post I recently contributed to the PhilPapers discussion forum. It is a response to Alvin Plantinga's "Evolutionary Argument Against Naturalism."
I do not know how much serious discussion among professional philosophers has been devoted to Plantinga's argument, though I know it is widely heralded by many non-professionals who do not like evolutionary theory.
Plantinga wants to use evolutionary theory to attack naturalism, but his argument fails on epistemological grounds. His error is two-fold. First, he fails to state his general epistemological position, and so leaves us wondering what he means by "truth." Second, and more detrimental to his argument, he fails to consider the possibility of epistemological behaviorism, which I take to be the most robust and compelling approach to epistemology (following the work of Peirce, Dewey, Wittgenstein, Quine, Davidson, and Rorty, to name but a few).
Consider any of Plantinga's examples of how one could survive perfectly well with a set of entirely false beliefs. One might, he says, run up a tree when confronted with a tiger, because one believed that this was the best way to pet the cute, furry animal. Thus, one's actions would lead to survival, but one would be acting on a false belief. Plantinga's point is that one could survive with a set of wholly false beliefs, and so we should not think that the possession of true beliefs is beneficial to survival.
The problem is, Plantinga makes it impossible to understand such a person's beliefs.
Let us leave aside the question of how a person could, in the course of their development, come to possess only false beliefs. (How could one, in every case, learn to do things incorrectly? Can we imagine such a case?) This question gives us pause, but we might press forward more effectively by asking some different questions.
For example, under what conditions can we establish that a person believed one thing, and not another? What does it mean to say that the man believes the best way to pet a tiger is to run away from it? That is, in what conditions could we observe him and say, ah, yes, he believes that is how he should go about petting the tiger?
We might wonder here if the man speaks a language. If he does, then we could discuss his beliefs in terms of what he would say about his motivations, or what he says to himself about his motivations for behaving as he does. (This is how we talk about our own beliefs, after all.)
We ask the man, "Why are you running away from these tigers?" He might say, "Because I want to pet them!" How could we understand this response? We would think he was either mad, joking, or speaking a different language.
Of course, here we are assuming that the man would know what we meant by "tiger" and "running." That is, we would be assuming that the man had a relatively "true" set of beliefs, but only had a false belief about how he should behave with respect to the tiger. Yet, Plantinga's argument is that a person could have an entirely false set of beliefs. How could we communicate with such a person at all?
Suppose we decided that the person was just speaking a different language. How could we go about translating that language? Perhaps we would be able to get to the point where we decided that, according to the person, "wanting to pet the tiger" meant "wanting to escape" or perhaps "wanting to run up the tree," or some such. In that case, we would not say the person's beliefs were false at all. For us to conclude that all of their beliefs were false, we would have to be able to interpret their beliefs as contradicting their behavior in all cases, and that precludes the possibility of any effective translation of their language into our own. Thus, such a person as Plantinga describes would always seem to be talking nonsense.
One might try to defend Plantinga's argument by defining beliefs as distinct from what a person can say about them. Beliefs, then, are not propositional attitudes, but perhaps attitudes towards some kind of mental state(s). Perhaps beliefs are attitudes towards mental images.
Imagine a person forming an image of what it would be like to pet a tiger, and then acting on the image by running up a tree and waiting for the tiger to go away. In what sense is that acting on the image?
A person without language can be said to be acting on a belief, in so far as they are acting according to a representation of the world. But in Plantinga's hypothetical scenario, how are any such images actual representations of the world? They cannot correspond with anything the person does, or else we would say they were "true" images.
Try to imagine always acting on images which in no way correspond with lived experience. A person could not learn (could not produce new images) based on their experiences, or else some sort of correspondence would have to be at work. They could thus not respond effectively to novel situations, nor come to a rational understanding of their lived experience. Their experience itself would be a liability, as it would always seem to interfere with the images in their mind. Such a person would be living in a perpetual, debilitating state of cognitive dissonance.
We clearly have no way of understanding how a person could function as Plantinga describes, whether or not we stipulate that the person in question uses language or merely images.
This is a fatal flaw in Plantinga's argument, but it does not fully counter the intuition which Plantinga and his supporters maintain: namely, that evolutionary theory cannot account for truth. To uncover the full depth of Plantinga's error, we have to look at just how evolution could have selected for truth. That is, we must come to a more feasible notion of true belief.
And here it is quite simple: for we have no trouble understanding how evolution could select for the ability to develop and utilize various tools, and why not think that language is such a tool? If we think of language as a tool for furthering our successful reproduction, we can regard the category of "truth" as one aspect of that tool. Specifically, it is that aspect of the tool which turns language upon itself in a process of evaluation. "Truth" is what we call language when it works for certain ends.
Evolutionary theory does not say that our only end is reproduction; rather, it says that our ability to reproduce explains why we are the way we are. Our ends can thus be at least partially explained in terms of reproduction without being reduced to it. And so we can talk about truth without talking specifically about what will or won't lead to reproduction.
The main point here is that truth is a matter of what works, and that is all a matter of behavior. This is why we could not possibly understand the beliefs of somebody such as Plantinga describes, and why we should have no problem acknowledging our evolutionary origins whilst rejecting supernaturalism.