Philosophy, Film, Politics, Etc.

Thursday, October 29, 2009

Valid Inferences and Valid Arguments

I would like to distinguish between the form of a valid deduction and the validity of an argument. Formal logic deals with the forms of our inferences, and not the validity of our arguments. For example, appealing to the masses is not a valid form of argument, though it could be expressed as a valid syllogism. A valid argument must have a valid logical form; or, at least, it must be expressible in such a form. But having a valid logical form is not enough.

Admittedly, I haven't thought about this distinction before, and I would not be surprised if I suddenly reversed or qualified my position.

This might be better discussed by focusing on examples of logical fallacies.


Example 1: Begging the Question

1) If X, then ~~X.
2) X.
3) Therefore, ~~X.

This is begging the question by any account. Yet, it is a valid syllogism.
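To verify the validity, here is a minimal sketch (my own illustration, not part of the original post) that brute-forces the truth table in Python. It confirms the form is valid--no truth value of X makes both premises true and the conclusion false--while remaining completely blind to the circularity.

    # Hypothetical illustration: check Example 1 by truth table.
    def example_1_valid():
        for x in (True, False):
            premise_1 = (not x) or (not (not x))   # If X, then ~~X
            premise_2 = x                          # X
            conclusion = not (not x)               # Therefore, ~~X
            if premise_1 and premise_2 and not conclusion:
                return False                       # a counterexample row exists
        return True                                # no row refutes the form

    print(example_1_valid())   # True: the form is valid, circular or not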


Example 2: Appeal to the Masses


This is also a logical fallacy, but it can be expressed as a valid syllogism:

1) If everybody knows that X, then X.
2) Everybody knows that X.
3) Therefore, X.

The form is valid. However, the argument is fallacious.
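The same brute-force check applies here, with "everybody knows that X" modeled as a second propositional variable E (a simplifying assumption of mine; the epistemic content of "knows" is ignored).

    # Hypothetical illustration: check Example 2 by truth table.
    def example_2_valid():
        for e in (True, False):                    # E: "everybody knows that X"
            for x in (True, False):
                premise_1 = (not e) or x           # If E, then X
                premise_2 = e                      # E
                conclusion = x                     # Therefore, X
                if premise_1 and premise_2 and not conclusion:
                    return False
        return True

    print(example_2_valid())   # True: valid form, fallacious argument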


Analysis

One might try to reject the distinction I'm making by noting that the logical form, while valid, involves necessarily false premises. In other words, it might be argued that the above syllogisms are not sound. Yet, it seems to me that all of the premises in the above syllogisms are plausibly true in at least some cases. Indeed, to reject the antecedent in Example 1 would be to reject the possibility of any logical argument, since the antecedent could be anything. And to reject the antecedent in Example 2 would be to reject the possibility of common knowledge. It thus appears that the arguments cannot be said to have necessarily false premises.


It might also be argued that, in Example 2, the conditional encapsulates the logical fallacy in question, because whenever we say "everybody knows that X," we are appealing to the masses. Yet, it is not the case that, whenever we use this expression, we are making an argument. We might just be observing a commonly known fact. And, if everybody does know that X, then X is true. So, it is not clear that the conditional is false or invalid in Example 2.

Thus, the distinction I am drawing seems plausible, at least.

Any thoughts?

This post edited for clarity on January 28, 2010.

Thursday, September 24, 2009

Original Sin

Context: A woman calling herself "Mom" has made the following claim: Original Sin can only be understood from the heart, not the head. And understood it must be, she says, or else . . . well, I'm not sure how she would have me end that sentence. But apparently it's very important I understand Original Sin with my heart. The problem is, I can't get it past those darn censors in my head. Here's what I wrote:


God thinks I deserve to be punished for being born, but God is punishing Himself instead. Since God created me, God is responsible for my birth. It seems only right that God would punish Himself.

If God punishes Himself, it is because He wants to. He makes the rules, and He could give Himself a break. He could decide that nobody needs to be punished for sin. But He doesn't. He punishes Himself. That's His choice. God wants to suffer.

But let's consider this suffering. God punishes Himself by killing His son. Though that's not quite right. More specifically, God punishes Himself by letting His son be born--because once Jesus was born, he had to die. Jesus was at least part human, according to this story. And everybody dies.

Now, God created people to die. Does God suffer every time a person dies? Or did He only suffer when Jesus died?

If He only suffered when Jesus died . . . well, does that mean He loved Jesus more than me?

Maybe God suffers every time anybody dies. But God created everybody to die. So God did not specifically punish Himself by creating Jesus and letting Jesus die. God punishes Himself by letting every person be born. Jesus' life and death were not special, from God's perspective. Jesus' death was not a special sort of self-punishment for God. It was self-punishment as usual. (That is, unless God loves Jesus more than He loves me.)

Now, let's consider God's plan here. God says somebody has to be punished because people are born into sin. So He created a son, which apparently means Jason doesn't have to suffer. Yet, Jason suffers. Everybody suffers. So God's plan doesn't seem to be working so well.

This is the story, as I understand it: God has created people in His own image, and God has decided to punish Himself (by killing His son, who was the only person whose death meant anything to God, apparently), because people do X, Y, and Z. This punishment is supposed to save people from suffering, but people still suffer every day. So God's plan didn't work.

And why did God decide that, if people do X, Y, and Z, then somebody has to suffer? Why did God put these rules into play in the first place?

Was it a test? God wanted to see if people would do X, Y, and Z. So it was a bet, and God lost. He punished Himself for losing His bet. Now everybody on earth has to suffer, because God couldn't even figure out a way to get us off the hook. He tried by killing His son, but that didn't work.

I'm supposed to put all my faith in such a God, when so far all He's shown is incompetence?

Any intelligent being can make up games and decide to punish himself if such and such happens. And I'm in no position to judge. I've had plenty of failures of my own, so who am I to expect God to be perfect?

The question is, why should I think this is anything but a weird story about a masochistic super-being who makes up arbitrary rules which lead him to punish himself and his own creation?

Why should I think this story has any meaning for my life?

I suppose your answer is: because it's God! It's True!

But that's no answer. I'm sure it speaks to your heart, but it doesn't seem the sort of thing minds are meant for.

Saturday, September 5, 2009

Stanley and Williamson on Ryle: "Knowing How"

[In July and August, 2010, I made some significant revisions and deleted some questionable portions of this post.]


In "Knowing How" (2001), Jason Stanley and Timothy Williamson (S&W) defend intellectualism against Gilbert Ryle. Their paper was selected by The Philosopher's Annual as one of the ten best papers of 2001. Yet, as I will argue, they profoundly misrepresent Ryle (and so fail to make a sound critique of his project). This suggests that there has been a widespread and severe misunderstanding of Ryle among academic philosophers.

Despite the problems with their response to Ryle, S&W's formulation of knowledge-how as a species of knowledge-that is a stand-alone argument and invites criticism of its own. As I aim to show, a clarification of some relevant issues makes it difficult to fully accept their analysis.

In section I, I present intellectualism. In section II, I correct S&W's misrepresentation of Ryle's argument against intellectualism. In section III, I point out an important difference between Ryle's and S&W's conceptions of knowledge-that. In section IV, I defend the ability hypothesis against S&W's criticisms. In section V, I make a case against S&W's analysis of knowledge-how.


I. The Intellectualist Legend

In chapter 2 of The Concept of Mind, Ryle argues that knowledge-how cannot be reduced to knowledge-that.* This argument is presented within the context of his critique of the "intellectualist legend." This is the view that intelligent behavior is always accompanied or preceded by a mental act, such as an act of theorizing. Intellectualism propagates the "myth" (to use Ryle's term) that what makes a behavior intelligent is its causal relation to a mental act which precedes or accompanies it.

Ryle summarizes his point (pp. 49-50):

"The central point that is being laboured in this chapter is of considerable importance. It is an attack from one flank upon the category-mistake which underlies the dogma of the ghost in the machine. In unconscious reliance upon this dogma theorists and laymen alike constantly construe the adjectives by which we characterise performances as ingenious, wise, methodical, careful, witty, etc. as signalising the occurence in someone's hidden stream of consciousness of special processes functioning as ghostly harbingers or more specifically as occult causes of the performances so characterised. They postulate an internal shadow-performance to be the real carrier of the intelligence ordinarily ascribed to the overt act, and think that in this way they explain what makes the overt act a manifestation of intelligence. They have described the overt act as an effect of a mental happening, though they stop short, of course, before raising the next question--what makes the postulated mental happenings manifestations of intelligence and not mental deficiency."

Ryle aims to show that intelligence is a matter of behavior, and that it does not always involve an antecedent act or process of intellection. He regards theorizing as the most exemplary form of intellection, which he describes as "one practice among others" (p. 26). The goal of theorizing, he says, is usually knowledge-that, which he regards as being in possession of facts. Knowledge-how, in contrast, is defined in terms of intelligent behavior.


II. The Reductio

S&W correctly identify the form of Ryle's argument against intellectualism, which is a Reductio turning on a vicious regress. According to S&W, Ryle begins his Reductio by adopting the following two premises:

Premise 1: If one F's, one employs knowledge-how to F.
Premise 2: If one employs knowledge that p, one contemplates the proposition that p.

Yet, Ryle adopts neither of these premises.

Ryle never suggests anything like premise 1. Admittedly, S&W go on to explain that this premise must be qualified, because it would not make sense if F stood for something like digestion, for example. They say F must at least be restricted to intentional behavior. Curiously, they also say that Ryle "hints" that such a restriction is called for. Yet, Ryle hints at no such thing, because he never suggests this premise to begin with. Ryle states explicitly and repeatedly, from the very beginning of his discussion, that he is talking about intelligent behavior. He leaves no room for doubt. He is not talking about intentional behavior in general, nor is he talking about reflexive behaviors such as digestion.

More importantly, Ryle would not accept premise 2. Ryle does not commit to the position that employments of knowledge-that always entail contemplation. He claims that knowledge-that is the goal, not the condition, of theorizing. Still, even if we suppose that contemplation generally employs knowledge-that, contemplation is not the only, or even the most paradigmatic, example of employing knowledge-that. One may exhibit knowledge-that by publicly or privately stating propositions, such as when one recites the rules of chess. Knowledge-that entails acknowledgment of a proposition, and not necessarily contemplation.

Ryle clearly does not make the argument S&W suggest. In contrast to S&W's formulation, Ryle's Reductio can be better understood as follows. As noted earlier, Ryle argues that acts of intellection, such as theorizing, are one species of intelligent behavior. It follows that, if intelligent behavior must be accompanied or preceded by some other act of intellection, then an infinite number of acts must occur to produce any intelligent behavior. To put it more succinctly, since intellection is itself a form of intelligent behavior, it cannot be regarded as a necessary antecedent to intelligent behavior without calling for an infinite regress.


III. Knowledge-That

Given Ryle's dispositionalism, and the way he clarifies the knowledge-how/knowledge-that distinction, it is clear that he regards knowledge-that in terms of abilities. For example, Ryle explains the knowledge-how/knowledge-that distinction with the example of learning how to play chess (pp. 40-41): being able to recite the rules of chess is not the same as being able to play the game.** He also refers to knowledge-that with the phrase "propositional competence" (p. 49), a phrase which draws our attention directly towards abilities. While Ryle does regard knowledge-that as being in possession of facts, this should be regarded dispositionally, as a set of abilities to do with the use of a language.***

Yet, S&W introduce Ryle's distinction by claiming that knowledge-that "is not an ability, or anything similar." They attribute to Ryle a more nebulous, though perhaps more common, view of knowledge-that as "a relation between a thinker and a true proposition." Unfortunately, it is not clear what it means for a person to stand in a relation to a true proposition, and S&W offer no clarity on the matter.

In any case, after presenting a complex semantic and syntactic analysis, they conclude that statements about knowledge-how contain embedded questions, the answers to which take the form of Russellian propositions, even if those propositions are only entertained "under a practical mode of presentation." In other words, when we say that somebody knows how to do X, we are saying they know, of some way w, that it is a way to do X, under a practical mode of presentation. S&W conclude that knowledge-how entails propositional knowledge, or knowledge-that.

While S&W's formulation of knowledge-how poses no apparent threat to Ryle's Reductio against intellectualism, it does challenge his view that knowledge-how is logically prior to knowledge-that.

In the remainder of this paper, I present a comparative analysis of how S&W's and Ryle's formulations of knowledge-how play out, both in general and in specific relation to the ability hypothesis. I begin by defending the ability hypothesis against S&W's objections.


IV. The Ability Hypothesis

In addition to critiquing Ryle and defending intellectualism, S&W use their analysis to defend Frank Jackson's knowledge argument against the ability hypothesis (hereafter "AH").

Briefly put, the knowledge argument claims that a scientist called Mary is able to learn every physical fact about color vision without ever living in the world of colorful objects, and that she learns some new facts about color when she finally sees colorful objects for the first time. The conclusion is that Mary's newly acquired facts are not physical facts, and so physicalism (the view that the physical facts are all the facts) is false.

AH responds to this challenge by arguing that Mary gains new abilities--specifically, the abilities to recognize, remember, and imagine color experiences--and not new factual knowledge. This is sometimes expressed by saying that Mary gains know-how, and not propositional knowledge, or knowledge-that. (Considering the confusion over how to define terms such as "knowledge-how" and "knowledge-that," this may not be the best way to formulate AH.)

S&W argue that AH depends upon the mistaken belief that knowledge-how is distinct from knowledge-that. Yet, AH need not reject the view that Mary's new abilities are propositional knowledge "under a practical mode of presentation." It need only reject the claim that the facts involved are new to Mary. AH may thus stipulate that Mary knew the same facts while inside her black-and-white room, though under a different mode of presentation. Thus, even if S&W's formulation of knowledge-how is adopted, it does not force a rejection of AH. (See Yuri Cath, "The Ability Hypothesis and the New Knowledge-How" [2009].)

S&W also attempt to undermine AH by rejecting the claim that knowledge-how can be understood in terms of abilities. For example, they claim that a concert pianist who loses her arms may still be said to know how to play the piano, even though she has lost the ability.

There is something strained about insisting that the pianist still knows how to play the piano but is just unable to do so. We cannot in full confidence say that she knows how to play the piano in the ways she once knew. Surely she still knows something of what it is like to play the piano; she can remember it in some respects; and she can probably use her toes or other devices to pick out a tune. She may also have retained the ability to instruct others in the art of performance. She has retained some abilities, but not others. There is no reason to conclude that she retains the know-how, but not any of the abilities. (See Laurence Nemirow, "So This Is What It's Like: A Defense Of The Ability Hypothesis" [2006].)****

The relevance of S&W's analysis to AH is independent of its relation to both Ryle and intellectualism. While S&W may not have successfully critiqued Ryle, they may still be said to have furthered our discussion of AH. However, it is not clear that they have done so in a way which is of decisive value. Even if one accepts S&W's formulation of knowledge-how, one may still retain AH. Yet, there are reasons for rejecting their formulation--reasons which may provide a clearer understanding of AH.


V. Knowledge-How

According to S&W, when we say that Hannah knows how to ride a bicycle, we mean that Hannah knows of some w that it is a way for her to ride a bicycle. We have little trouble specifying ways of riding a bicycle, and it is hard to imagine anybody knowing how to ride a bicycle without knowing of such ways that they are ways to ride a bicycle. We may thus be tempted to accept S&W's formulation. However, activities like bicycle riding are quite advanced and involve some degree of intellectual awareness. This is why we have little trouble describing and analyzing them. As Ryle says, "intellectual development is a condition of the existence of all but the most primitive occupations and interests" (p. 317). For more primitive cases of knowledge-how, it is impossible to specify ways. This makes S&W's formulation far less appealing.

One way of approaching more primitive varieties of knowledge-how is to consider Ryle's distinction between achievement and task verbs. It appears that S&W's formulation of knowledge-how ignores this distinction. They take "Hannah knows how to ride a bicycle" as a paradigmatic case of knowledge-how attribution, even though "ride a bicycle" is a task, not an achievement. This profoundly limits the scope of their analysis.

Task and achievement verbs differ in their "logical force" (p. 150). Unlike task verbs, achievement verbs imply abilities which are not identified by indicating specific events or processes. Ryle further notes that achievement verbs differ from task verbs in so far as they do not admit of the qualifications "correctly" or "incorrectly." One can ride a bicycle correctly or incorrectly, but one cannot win a race correctly, or see the color red correctly.

Ryle says it is a category error to suppose that, when one observes the winning of a race, one observes two distinct events: the crossing of the finish line and the act of winning. Achievement verbs "do not stand for perplexingly undetectable actions or reactions, any more than 'win' stands for a perplexingly undetectable bit of running, or 'unlock' for an unreported bit of key-turning" (p. 152). What is observed is one's crossing of the finish line before the rest of the competitors, and this is interpreted as an act of winning. Thus, while "knowing how to cross the finish line" presumably entails knowledge of some way to cross the line, "knowing how to win" does not entail knowledge of any particular way of winning.


So it is with perception of the color red: what is observed is the act of identification, and--based on its relationship to other observations--this is interpreted as an act of seeing, remembering, recognizing, and/or imagining. One cannot see, imagine, recognize, or remember redness correctly or incorrectly. One can, however, remember orange and mistakenly identify it as red. One can see red and mistakenly identify it as orange. One can correctly or incorrectly identify instances of redness.

Knowing what it is like to see the color red is a good candidate for knowing-how which is not based on intellection. In Ryle's terms, the verbs "see," "remember," "recognize," and "imagine" are achievement verbs, not task verbs. We have these abilities, and know that we have these abilities, but we cannot specify ways of performing these actions. While we can say somebody remembers, recognizes, or imagines red, we cannot indicate any way of doing so. We cannot even identify the way we do these things ourselves. Which is to say, nothing counts as a way of seeing red.

The ability to identify redness is a behavioral disposition, and it is implied when a person says, "I know what it is like to see red." However, saying "I know what it is like to see red" does not indicate anything it is like to see red at all. It does not pick out anything about a separate act of seeing as distinct from the act of identification. There is knowledge-how associated with "knowing what it is like to see red" which cannot be defined in terms of propositional competence, or knowledge of particular ways.

To take another example, Ryle notes that being able to tell jokes does not entail knowing that there is any particular way of telling jokes. When we say somebody knows how to tell jokes, we do not imply that there is any particular way of telling them. We do not imply that, when he tells a joke, he is employing any particular method which he applies to joke-telling in general. We could not indicate any such method if pressed, and neither could the person who knew how to tell jokes.

S&W claim that when we say a person knows how to tell a joke, we mean they know that some w is a way to tell a joke. This suggests that the person who knows how to tell jokes knows that one act of telling a joke is an example of telling jokes. But, then, in attributing knowledge-how to the person, we are claiming only that they know that one particular joke-telling exercise is a joke-telling exercise. Clearly this is not what we mean when we attribute knowledge-how. When we say a person knows how to tell a joke, we are attributing abilities which extend beyond any particular set of jokes.

One can know how to tell jokes well without there being any fact of the matter about what defines good joke-telling. Similarly, one can know what it is like to see red without there being anything one could identify as what it is like to see red. This counter-intuitive fact has made resolution of the knowledge argument rather difficult.


VI. Conclusion

S&W approach a Rylean understanding of knowledge when they discuss Carl Ginet. Following Ginet, they note that "it is simply false that manifestations of knowledge-that must be accompanied by distinct actions of contemplating propositions." Ryle makes exactly this point, and it is why he rejects the intellectualist legend. Yet, S&W use Ginet's observation as a point against Ryle. This is explained by the fact that they have fundamentally misunderstood Ryle's project.

Intelligent behavior is not always dependent upon propositional competence. Contrary to S&W's formulation, knowledge-how does not always entail knowing that any particular w is a way to X.




Notes

* Stanley and Williamson reference another of Ryle's texts, originally published earlier though now available in a collection published in 1971. I unfortunately do not have this other text at my disposal. However, that text seems to be in line with what can be found in The Concept of Mind, and S&W do not draw attention to any significant differences between the two texts.

** If this is not clear, imagine a person who can recite all the rules of chess, but who demonstrates no strategic competence whatsoever. When they play, they follow the rules, but do not make any moves which might be regarded as intelligent. For example, they might only move their pawns. This sort of playing is what we would expect of somebody in the very beginning stages of the learning process; and, if such a student were asked if they could play chess, they would reasonably respond, "No, I'm just starting to learn."

*** We might suppose that knowledge-that is a species of knowledge-how, though Ryle does not put it this way. He regards knowledge-that as a peculiar sort of ability which does not easily translate into the language we use for knowledge-how. For example, he notes that we normally speak of a person having only slight knowledge of how to perform actions, but not of having slight knowledge that something is true.

**** Nemirow (2006) identifies Noam Chomsky and Torin Alter as supporters of this argument against AH. However, Alter has recently recanted his position on the matter. See Alter, "Phenomenal Knowledge Without Experience" (2008), footnote 14.

Tuesday, August 25, 2009

Logic and Reference

I want to better explain why I reject the idea that logic refers to something, such as abstractions or Platonic forms.

Words and sentences, of themselves, do not refer to anything. Rather, people can use words and sentences to refer to things. (This should be clear when we remember that the same words and sentences can refer to different things, depending on the context of utterance.) Furthermore, the meaning of a sentence is not always its referent; for we can understand sentences even when a referent is unspecified, and also in cases where the referent is non-existent. (E.g., "The King of France is bald.") From these points it follows, first, that the referent of a sentence depends on how it is used in a particular context; and, second, that sentences can be meaningful even if they have no known referent.

When we look at the meaning of a syllogism, we may easily find referents. For example,

  1. All men are mortal.
  2. Socrates is a man.
  3. Therefore, Socrates is mortal.

Taken by themselves, each of these sentences can be (and normally would be) used to refer to things, namely men, mortality, and Socrates. However, they can also be used to illustrate a certain logical form. The fact that there are referring terms is incidental. We could easily replace "Socrates" with "Alfred," and the logical validity would remain intact, even if nobody had any idea who (or what) Alfred might be.

The use of these sentences as an illustration of a valid syllogism is not their usual referring use. We can say that the meaning of each sentence, in this illustration, is based on its grammatical form, and not on what extra-linguistic entities it might be used to point to. We most clearly present rules of inference when we use symbols which have no referring use in our common language. We thus can say,

  1. All members of A are members of B.
  2. x is a member of A.
  3. Therefore, x is a member of B.

By taking out words like "men" and "Socrates," we help avoid the confusion of thinking that the meaning of our logical demonstration somehow depends on the referring use of our terms.
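The point can be made concrete with a small sketch (my own illustration, not part of the original post), modeling the form with Python sets. The names are arbitrary, which is exactly the point: the check passes regardless of who or what, if anything, the symbols pick out.

    # Hypothetical illustration: the syllogistic form, modeled with sets.
    def valid_instance(A, B, x):
        """Whenever both premises hold, the conclusion holds."""
        premises = (A <= B) and (x in A)   # 1. All A are B.  2. x is a member of A.
        conclusion = x in B                # 3. Therefore, x is a member of B.
        return (not premises) or conclusion

    mortals = {"Socrates", "Alfred", "Fido"}
    men = {"Socrates", "Alfred"}

    print(valid_instance(men, mortals, "Socrates"))   # True
    print(valid_instance(men, mortals, "Alfred"))     # True: who Alfred is doesn't matter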

Of course, some students of logic might ask, "What does 'x' refer to?"

The correct answer is, nothing. The letter 'x' is a place-holder, and anything could be substituted for it. This does not mean that 'x' refers to anything, as though anything were something specific we could point to. And it doesn't mean that 'x' refers to some abstract category of 'logical thingness' or what have you.

Again, the meaning of these sentences, as they are used, is to demonstrate a form of logical deduction. That function does not depend on any references (except perhaps a reference to logic itself; but this reference is not found in any of the three sentences used in the syllogism). The very reason we use arbitrary symbols without referential meaning is to make this clear.

So, when people say that logic must refer to something, such as Platonic forms, not only is it not clear what they are getting at; it is not even clear why they feel the need to get anywhere.

Thursday, August 20, 2009

The Language of Consciousness

There is no good definition of "consciousness"--at least, not in any rigorous philosophical or scientific sense. There are just lots of ways we use the term in everyday life. For example, we use it to distinguish between sleep and wakefulness, or to indicate that we are focusing our attention on something, or that we remember something, or that we know something. These aren't all the same, or even necessarily similar, processes. So the idea that there is some unique thing called "consciousness" is perhaps an error. And so the idea that there are "conscious processes" in the brain is also perhaps an error.

The word "consciousness" does not pick out anything specific, but has meaning only in so far as it provides some structure to our discourse--specifically, our discourse about ourselves. It is a grammatical construction without extra-linguistic referent.* Once we've understood the language, we've understood consciousness. There is nothing left to understand. Thus, as Dennett says, there is nothing to understand about consciousness beyond verbal reports. (But this does not mean there is nothing else to understand about brains or behavior.)

If a person says "I am hungry," we know what they mean, because we've learned the language. And no investigation into their brain or stomach will explain the meaning any better to us. Of course, by looking at their brain and stomach we can get a better idea of why they've expressed that sentiment. But the meaning of the sentiment is no better understood by such an investigation.

Consider, if I say "John has malaria," you can understand me a little bit, even if you don't know who I am talking about. But if we are engaged in conversation, you will want to know who I'm talking about so that you can understand me fully. You assume that "John" refers to somebody specific.

With words for consciousness and feelings, we may similarly be tempted to look for hidden referents. Yet, this is a mistake. Not all nouns are names for things. In my understanding, the language of consciousness (notions of mind, thought, and feeling) is used to indirectly refer to unknown causes of behavior. The meaning of these terms is rooted in behavior, and yet they do not directly refer to the behavior itself, nor to any identifiable causes. They are floating signifiers, meaningful but without discernible referents.

With neuroscience, we can greatly improve our understanding of the brain and human behavior. But we won't understand consciousness any better, because there is nothing about consciousness hidden in the brain (or anywhere else). The word "consciousness" doesn't point to the brain. It doesn't point anywhere. It is not an extended finger, but more like a waving hand.

Consider an example. I am focusing on writing this post. That is a fact about my mind, right? Now, by studying the brain we can better understand how a human being goes about writing and thinking about philosophy. We can analyze the behavior. But we won't gain a better understanding of what it means to focus on writing something. We won't improve our understanding of that, because that is something we already understand by verbal report. The meaning of the expression "focusing on writing" is found in the act itself, in the behavior, which anybody can observe.

We can use verbal reports to guide our study of the brain, just as we can use any other behavioral cues. But in so doing, we are using the behavior to understand the brain, and not vice versa. It is only because we understand verbal reports that we can use them to analyze the brain. So how could analyzing the brain help us understand the reports any better?

Again, it can help us understand what caused them, but that does not help us understand what they mean.

* Edit: I should clarify this. Verbal reports, such as "I am hungry," are usually not references to observable behavior, nor are they references to anything hidden behind observable behavior. They are not usually references at all. When I said that "consciousness" is a grammatical construction without extra-linguistic referent, I was ignoring the way we can use that term to analyze human behavior. The language of consciousness can be used to analyze behavior, and so "consciousness" can refer to things like wakefulness, readiness, and so on; however, the term "consciousness" is usually taken to mean something hidden behind those observable behaviors. This is what I reject. While the language of consciousness directly involves observable behavior, it is thought to indirectly point to something else. In my view, we couldn't possibly be pointing to anything else. So, words like "consciousness" can be taken to refer to observable behavior, but when we take them to refer to something else, such as a sense of self, or an "immediacy of experience," we are referring to linguistic constructions, and nothing outside of our discourse.

Friday, August 14, 2009

Mathematical Procedures and Incommensurability

I. Procedure and Representation

We can use numbers to perform calculations without having to stipulate that each number refers to anything outside of our mathematical operations. Number systems are tools for counting and performing other arithmetical functions. We can define arithmetic procedurally, and avoid wondering what sort of existence numbers might have on their own, perhaps in some Platonic realm. Numbers are symbols used to represent mathematical procedures.

I used to think that the existence of irrational numbers posed a problem for this view. To define rational numbers, we say they can be represented as a ratio of two integers, m/n (where n is not zero). Irrational numbers are defined as numbers which cannot be represented as fractions in this way. They seem to point to something beyond comprehension, beyond the possibility of finite containment.

Indeed, we have symbols for irrational numbers, and the numbers themselves are supposed to be their referents. So in what sense can we say that irrational numbers are representations of procedures, if the numbers in question cannot be fully represented in any finite space?

This is a fundamental logical problem, or so it has been claimed.


II. Infinity and Impossible Objects

A long time ago I thought it was a good idea to define "infinity" as "the relationship between a discrete point and a continuous line." I don't know why that definition occurred to me, or why I liked it. Probably because of something to do with Zeno's paradoxes.

The idea is this: A discrete point has no extension. A line has extension. Therefore, no matter how many discrete points you attempt to connect, you will never get a line. And no matter how many discrete points you attempt to extract from a line, you will never decrease its length. So, there is an infinite number of discrete points on any line, and a line is infinitely longer than any series of discrete points.

As a definition, this may not be very useful. But as a way of thinking about infinity, I find it very interesting. And implicit in this approach is an incommensurability; namely, that between extended and unextended objects.

Consider pi, which relates a circle's radius to its circumference (C = 2*pi*r) and to its area (A = pi*r^2). Pi is an irrational number. Why?

Well, one way to look at it is to consider how we define a circle as such. We define a circle as the set of all points in a plane equidistant from a single point. Yet, in this case, a circle is defined in terms of discrete points, which have no extension. A circle is a continuous line. So we have an incommensurability, and this indicates a geometrical impossibility. Pi is irrational because it is impossible to produce a perfect circle.

This is not a human limitation. It is a geometrical fact.

Yet, in some sense, mathematicians say that pi exists. The word "pi" represents something real, even if it cannot be calculated completely. We can calculate it to any arbitrary degree of accuracy. The question is, what does "accuracy" mean here? What do our calculations signify?

From the procedural point of view, the calculations signify steps in our attempt to generate a circle.

Consider how we might go about constructing a circle in the real world. We might try any number of ways. The irrationality of pi indicates that, no matter what method we use, there will always be a better way. It's not that we are getting closer and closer to the true value of pi. Rather, it is that we are getting closer and closer to a perfect circle, even though such an object cannot exist. We can approach impossible objects with infinite precision; we just cannot make them. So, pi is irrational. This is a geometrical fact about circles, and not about whatever might be trying to generate them.
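Here is a minimal sketch of one such procedure (my own illustration, not from the post): Archimedes' method of inscribed polygons. On the reading offered above, each doubling of the sides is a better procedure for approaching a circle, not a closer inspection of a completed one.

    # Hypothetical illustration: Archimedes' side-doubling procedure. Each step
    # inscribes a regular polygon with twice as many sides in a unit circle;
    # the half-perimeter n*s/2 approaches pi, but every polygon is still a polygon.
    import math

    n, s = 6, 1.0    # start with an inscribed hexagon: 6 sides of length 1
    for _ in range(10):
        s = math.sqrt(2 - math.sqrt(4 - s * s))   # side length of the 2n-gon
        n *= 2
        print(n, n * s / 2)   # after 10 doublings: a 6144-gon, ~3.14159251...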


III. Geometry and Arithmetic

Irrational numbers are found when we attempt to produce arithmetical models of impossible geometrical operations. Irrational numbers indicate a tension between geometry and arithmetic.

I recently explored this idea by trying to find a geometrical procedure for creating a single perfect cube out of the parts of three identical cubes. If it is possible to create such a cube, then the cube root of 3 must be calculable. Since the cube root of 3 is irrational, I suppose that one cannot, even in theory, combine three cubes into a single one.

Today I thought of another example: the square root of five. Consider a square with sides of length 2 meters. We would thus say the area of this square is 4 square meters. I suggest that it is theoretically impossible, by any means imaginable, to increase the area of such a square by exactly twenty-five percent, arriving at a square with an area of 5 square meters.

Then I did a couple Web searches on irrational numbers, and I was happy to find this page, which supports my understanding. It says irrational numbers surface at the intersection of arithmetic and geometry, and that they indicate incommensurability. The author of that site (Laurence Spector, a math instructor at a community college in New York) claims here that there is a "fundamental logical problem" concerning the existence of irrational numbers. He suggests that the tension between arithmetic and geometry exists because it is impossible to name or measure every length.

I think there is something wrong with that explanation. Are we to take it that irrational numbers represent unmeasurable lengths?

Now, Spector notes that in order for irrational numbers to be considered numbers at all, we must have a procedure to name, or measure, them to any arbitrary degree. That is, to say that pi is a number, we must have some procedure which allows us to place it on the continuum of Real numbers. This doesn't mean we designate a specific spot for it; rather, it means that we can place it between two rational numbers--two numbers which we know how to measure.
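A minimal sketch of the sort of procedure Spector has in mind (my own illustration, using the square root of two for simplicity, since its square can be compared exactly): bisection never names the number, it only traps it between two rationals, to whatever tolerance we demand.

    # Hypothetical illustration: trapping sqrt(2) between two rationals.
    from fractions import Fraction

    lo, hi = Fraction(1), Fraction(2)      # since 1^2 < 2 < 2^2
    for _ in range(20):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                       # mid is still too small
        else:
            hi = mid                       # mid is already too big

    print(float(lo), float(hi))   # both bounds rational: 1.414213... < sqrt(2) < 1.414214...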

But why does Spector conclude that the existence of irrational numbers means that not every length is measurable?

His assumption, apparently, is that irrational numbers represent lengths. This is the problem.

Numbers do not represent lengths. People may represent lengths by using numbers. (Similarly, words do not refer to things. People refer to things using words.)

Am I referring to a specific length when I say "the square root of two meters?" No, I don't think so. Not unless somebody told me that some particular length was the square root of two meters. But calling any length "the square root of two meters" can only be arbitrarily justified, and not according to any definitive standard of measurement. There is no way to determine that any length is equal to the square root of two meters, but we can always say it's close enough. This doesn't mean that any particular length is unmeasurable. It means "the square root of two meters" doesn't pick out a particular length at all.

When we produce a longer calculation of pi, we are not getting a closer approximation to the true relationship between a circle's radius and its circumference. Rather, we are getting a more detailed standard of measurement.

More detailed. Not more precise or more accurate.

Since there is no end to the possible digits you can place to the right of the decimal point, there is no sense in claiming that you're ever getting closer to the end. The longer the calculation, the more information we have; but this doesn't make the information better in any purely mathematical sense. It is only better if you have some purpose, some use, for all of those numbers.


IV. Conclusion

I see no sense in talking about unknowable, unmeasurable, or infinitely improbable lengths. Lengths are defined in terms of operations, procedures. We get numbers like "the square root of two" because we are devising procedures which have no geometrical correlates. There is no procedure for producing a perfect circle in geometry. There is no procedure for increasing the area of a square by exactly twenty-five percent, or for combining three identical cubes into one. These are theoretical limitations, not practical obstacles.

When we say irrational numbers are real, we mean that our procedures for calculating them are valid. I have no issue with that point. The question is not whether or not the calculations are valid; the question is what they mean.

I don't know if Spector's interpretation is common among philosophers or mathematicians. However, it seems to me that the supposed "logical problem" is only a problem of interpretation, of thinking that numbers indicate anything other than the procedures we have for calculating them.

Wednesday, August 12, 2009

Summarizing Dennett on Consciousness

A few days ago I posted the following in a discussion at PhilPapers:

Not far into Consciousness Explained (paperback, p. 23), Dennett writes: "Today we talk about our conscious decisions and unconscious habits, about the conscious experience we enjoy (in contrast to, say, automatic cash machines, which have no such experiences) -- but we are no longer quite sure we know what we mean when we say these things. While there are still thinkers who gamely hold out for consciousness being some one genuine precious thing (like love, like gold), a thing that is just 'obvious' and very, very special, the suspicion is growing that this is an illusion. Perhaps the various phenomena that conspire to create the sense of a single mysterious phenomenon have no more ultimate or essential unity than the various phenomena that contribute to the sense that love is a simple thing."

I think understanding this passage is critical to understanding Dennett's approach. Our talk of consciousness is not necessarily always about the same thing. The word is sometimes used to refer to a sort of "inner monologue." At other times, a focus of attention or an act of the imagination. It may yet refer to a strong feeling or a vague sensation. I am not suggesting all of these concepts are clearly defined, mind you. Yet, they are defined enough to enjoy currency in everyday life. They make sense, even if they lack philosophical rigor. The point is that a great many phenomena produce this talk of consciousness; that there is much sense and value in such talk; and that, if we want to understand what people are talking about when they talk about consciousness, we must understand what is motivating, underlying, and otherwise producing the language.

The point is not to first define what single phenomenon or entity is behind all of these processes, as though we even had a clear idea of what all of these various processes or phenomena entailed. If we did, there would be nothing left to discover. Rather, the point is to try to understand what such talk is about; what is going on to produce and ultimately justify such notions as feeling and thought. In the end, we may find that the term "consciousness" is unnecessary to explain humanity. This does not mean consciousness will have been "explained away." It only means that the term "consciousness" has come to serve a variety of functions in the absence of a robust model of humanity, and that once our understanding of humanity improves, the term may not seduce us into thinking it is so important. (And please remember I am talking about the term here, and not anything which it might signify in any particular situation.)


------------------

Soon after that, I posted this:

Dennett does not present Consciousness Explained as an explanation of consciousness. He presents it as an attempt to clear away some of the confusions in various disciplines, including philosophy, cognitive science, and neuroscience, which he believes hinder our progress towards explaining humanity. Perhaps the title is pretentious. No doubt it was chosen as an attention-getter. I don't read it as a statement of victory, but as a statement of focus. Dennett wants to overcome various philosophical arguments (such as Chalmers' zombie argument) which attempt to make consciousness out to be something inexplicable, something which cannot be scientifically explained. Dennett wants to change the way we approach discussions of "consciousness" (whatever that word is taken to mean) by deflating the presuppositions which stand in the way of a full, scientific explanation of humanity.

Much of the book is critical. He discusses various ways philosophers and scientists get into trouble by assuming there is an underlying unity of consciousness. He also attempts to make a more constructive contribution to the study of humanity, producing a very rough, initial sketch of a Multiple Drafts model; but this is offered as little more than speculation, not as an answer but rather as a step towards changing the way we approach questions about human experience. He is asking us to stop assuming that there is some unitary and intuitively obvious thing called "consciousness." He is asking us to instead ask what various, complex processes might produce the illusion that there is some unified, intuitively obvious thing so many people are tempted to call "consciousness."


-------------

I was then asked to further explain my understanding of Dennett's approach. I just submitted the following, which will hopefully appear on the site in a few days:

I should note that I haven't read Dennett for a few years, and I only just glanced at Consciousness Explained to extract the quote I offered earlier. So I may not do him justice here, and I may exaggerate the places where his and my understanding meet. That said, here is an elaboration of my understanding of Dennett's approach to consciousness, since you asked.

As I understand it, his approach is to try to understand why people use the language as they do without presupposing anything about what would make such language true or false. He wants to understand why the notion of consciousness has a role in our language at all. I thus think he is very Wittgensteinian.

The first step is to refrain from assuming that there is anything to be explained beyond observable behavior. Second, Dennett takes claims about conscious states at face value. He calls this heterophenomenology, which I think is a version of "ordinary language" philosophy. He argues that, whatever consciousness is, there cannot be any facts about consciousness beyond what is expressed in verbal reports.

He argues that the language of consciousness is part of a general discursive strategy which he calls the intentional stance. That is, our conceptual framework for talking about consciousness--notions like want, love, expect, and so on--is not a representational model, but a predictive strategy for regulating our behavior. The language of intentionality is seen as a set of tools for dealing with the enormous complexity of human behavior, and not as a set of terms which correspond directly to any particular facts of existence.

Intentions, mental states, consciousness . . . According to Dennett, these concepts do not refer to specific processes or entities. For example, when I tell somebody, "I feel hungry," because I want to make plans to go have lunch, I may be indirectly talking about my digestive system or some processes in my brain, but there is not a particular fact about myself which corresponds to the words "I," "feel," or "hungry." Nor is there any fact about myself which corresponds to the sentence as a whole. Rather, the meaning of my utterance is defined by the situation. Saying "I am hungry" here is akin to playing a particular card in a game of bridge. It serves a function; it has meaning in the context of that game; but it does not refer to anything. (Wittgenstein made this same point when he said that verbal reports of pain do not refer to inner sensations, but simply replace crying.) When I say, "I am hungry," I am trying to move a social situation in a particular direction. I am not representing a fact about myself, as if such a fact could exist anywhere outside of the language-game.

Dennett postpones coming to any conclusions about what the term "consciousness" means. For example, he notes that when people talk about consciousness, they usually mean something which has a "point of view." He does not define "consciousness" as "having a point of view," but he notes that this is one of the most popular ways the term is used. So he approaches an explanation of why people talk as if they had a point of view, ultimately regarding the notion of a point of view as a "theorist's fiction." He does not think that there is such a thing as a "point of view" which exists outside of our discourse; nor does he think there are some beings which just have consciousness or just have a discernible point of view, as if these were facts about bodies or minds which could be borne out by any investigation whatsoever. For Dennett, there is no fact of the matter here; there is no sense in questioning whether or not somebody really has a point of view, or really is conscious. Thus, for Dennett, the very notion of philosophical zombies is absurd. When we treat some things as having consciousness (or as having a point of view), we are employing an explanatory or predictive framework. We are not postulating facts which could be corroborated or falsified according to any scientific theory.

Dennett's concern is therefore not, "what beings have consciousness, and what beings don't?" Nor is it, "what constitutes consciousness? What is consciousness made of?" He does not say it is necessarily meaningless to ask these questions, however. He just doesn't want to presuppose that the term "consciousness" refers to anything. If somebody wants to define "consciousness" so that it has a particular, identifiable referent, then they can talk about whether or not and how it exists in any particular cases. But, according to Dennett, that is not how the language of consciousness has evolved.

Saturday, July 25, 2009

Karen Armstrong's "The Case for God"

Karen Armstrong is an advocate for NOMA ("non-overlapping magisteria"), the principle which says that science and religion involve unique, non-overlapping domains of human interest. In Armstrong's view, each offers a distinct way of finding truth. Science comes from logos, the path of logic, reason, and evidence. Religion, on the other hand, comes from mythos, tapping into the mythological, emotional, intuitive path to wisdom. She believes that religion and science are both necessary for humanity, but that problems arise when either one attempts to invade the other's epistemological territory.

Anyone familiar with my views on religion will know that I find this point of view highly problematic. For one thing, it romanticizes both science and religion. More importantly, the distinction is without practical sense. It cannot be justified by appeal to either logos or mythos, and so any disputes cannot be resolved by common agreement.

Here are two clever responses to her latest book, The Case for God. Both articles are from The Guardian.

The first is a review by Simon Blackburn. I have only one point to add to Blackburn's insightful criticism: He forgets to point out the self-contradiction in Armstrong's perspective. She argues, using reason and evidence, that religion is about something which cannot be intellectualized by reason or evidence.

The second article is a humorously scathing "digested read" by John Crace.

I wonder how the responses in major American newspapers compare. Unfortunately, I expect they are a lot less critical, and a lot more empty-headed.

Wednesday, July 22, 2009

Induction and Scientific Reasoning

In this post I argue that enumerative, or "simple" induction (henceforth "induction")* does not play a significant role in scientific discovery. I construct this argument within a framework of epistemological behaviorism.


I. The Meaning and Value of Science

As I understand it, knowledge is another word for ability. Scientific knowledge is predictive ability, which is the ability to organize our behavior in accordance with the unfolding of nature.** In other words, science is the process of learning how to predict what is going to happen in new situations. Scientific knowledge is demonstrable in so far as the abilities it engenders are demonstrable and reliable.

Science is not the process of describing what has already happened, nor is it the process of describing what is happening at any given moment. Of course, science can help us understand what has already happened and what is happening right now. But the focus of science is always on the future, not on the present or the past.

Facts, not theories, are representations of the world. Theories are tools for constructing representations of the world. The theory/fact distinction does not refer to some distinction that exists beyond our discourse. The distinction between theories and facts helps us understand the relationship between knowledge and action. The meaning of a theory is in its ability to generate facts, and the meaning of a fact is determined by its relationship to our behavior. Knowledge, as such, does not exist as information stored in the brain, or on paper, but rather in the way information relates to behavior.

Scientific theories do not approximate extra-theoretic truth. There is truth in science in so far as science works; which only means that scientific practice engenders abilities to predict how things in the world will behave. In other words, scientific theories help us regulate our own behavior more effectively and in new ways. Truth is thus defined in terms of human behavior. When we say a scientific theory is true, it is quite like when we say of an accomplished archer that his aim is true. For example, quantum mechanics provides a formal method (or set of methods) for regulating our behavior in specific domains--domains which were not open to us before quantum mechanics was established.


II. Induction and Observation


Theoretical/experimental frameworks precede observations. There could be no observation of quantum entanglement before there was a theoretical framework that provided a mathematical model for it. The observations make sense because we have a way of understanding them.

Consider: There are observations in quantum mechanics we still cannot explain, such as what some call "wave/particle duality." When we say scientists observe duality, what we mean is that their observations cannot be explained by a single model, but indicate the need for two apparently incompatible models. Thus it is more accurate to say that we are not sure what scientists are observing, because we lack a unified theoretical framework to make sense of the behavior.

Could induction help us solve this problem? Can we resolve the issue of wave/particle duality by inductive means?

We might say that, because we consistently observe wave/particle duality, wave/particle duality is a fact of nature. Thus, we now have a scientific theory which says wave/particle duality is true. But all we have done here is move from a situation in which we lacked a unified framework for interpreting our observations to a situation in which we claim that our initial, confused interpretation of our observations is true. What understanding would be gained by this move? How could it be called scientific?

If science were to proceed in this way--if this was how scientific theories were built--science would be a hopelessly circular enterprise. We would end up saying, "of course we observe wave/particle duality, because that is what our theory predicts!" This is to mistake an observation for a theory. Fortunately, science does not work this way.

A scientific theory does not simply say "wave/particle duality is true, because we have observed it to be true." Rather, scientific theories provide frameworks for making new predictions. They are not limited to what has already been observed. This is what makes them testable. It is how they open the horizons of human behavior in unexpected ways. This is where the value of science lies.

Inductive reasoning does not tell us anything beyond what we have already observed; it only attempts to account for our observations by defining them as instances of a general rule. But this is no accounting at all. It is not science.


III. Is there any role for induction in science at all?

There are two possible places to look for an answer to this question: in the production and in the testing of hypotheses.

First, consider whether induction could play an important role in coming up with testable hypotheses. Charles Sanders Peirce, the American pragmatist, claimed there is a different sort of reasoning which we use to come up with scientific hypotheses. He called it abductive reasoning. This is the process whereby we construct theories to account for facts which do not fit into our prevailing interpretive framework. Yet, it is not clear that this process follows any rules, methods, or principles. In any case, the suggestion that it depends upon induction is unwarranted. In fact, induction cannot play a pivotal role here.

If we observe a number of cases of X, but we have no explanatory framework for understanding X, how could induction help us produce a theory for explaining X? Induction only produces the claim that X is a rule or law of nature. This is not a scientific hypothesis, because it makes no novel predictions. Thus, in the formation of hypotheses, induction has nothing to offer.

Let us instead consider whether induction might play a role when we test hypotheses against observation. Are we using inductive reasoning to generalize from our observations? For example, the conservation of momentum has been supported by a great deal of experimental data. Does that mean that, after X number of trials, the meaning or relevance of the theory was finally established?

Clearly the meaning of the theory was not induced by the observations. The hypothesis preceded the experimental results. Was the relevance of the theory induced? Relevance to what, exactly?

We might say that, after a theory has been corroborated by a number of tests, its relevance to humanity has been established. But this is not induced. The relevance of the theory is deduced from the fact that the theory works.

Or, we might say that the relevance is not to humanity, but to the universe itself. But what relevance could a theory have to the universe, apart from the relevance it has to humanity?

Theories are relevant in so far as they engender ability. Their meaning and value is defined in terms of behavior, after all. So there is no sense in trying to define relevance to the universe as a whole, apart from the contexts in which the theories are used.

I think we can conclude that the meaning or relevance of a theory is never induced from observation.

It may be claimed that the meaning of the observations requires induction; that, in order to claim that the observations have some relevance beyond their particular time and place, we must use inductive reasoning. Yet, we should not forget that the interpretation of an observation already requires that it have meaning beyond the particular event it represents. For an observation to be regarded as such, it must make sense within some theoretical framework. Thus, we cannot claim that the meaning or relevance of an observation with respect to a theoretical framework requires induction without also claiming that the meaning of the theoretical framework requires induction. But this has already been shown to be incorrect. The meaning of a theoretical framework is not induced from observations or particulars.


IV. Conclusion

The search for a role for induction in scientific reasoning seems to lead us in circles. This would indicate a real problem for philosophers if we had some reason to think that induction simply must play a role in scientific reasoning. Fortunately, we do not. There is no reason to think that induction plays a significant role in scientific reasoning. So I think we can let the matter drop.


Notes

* The Stanford Encyclopedia of Philosophy's entry on the problem of induction indicates that this usage of the term "induction" is somewhat antiquated, and that the term is now sometimes taken to refer to a large range of synthetic or contingent judgments, or perhaps even to all contingent judgments. This seems to confuse the language, making it impossible to identify what the term "induction" is supposed to mean. In any case, the so-called "problem of induction" is quite specifically about enumerative induction, and it is with respect to this historical debate that I am writing. I see little reason to extend the term "induction" to other cases. In fact, I suspect that the confusion surrounding contemporary uses of the term indicates conceptual confusions within philosophy; I would thus prefer sticking to the traditional usage, if only as a lifeline when trying to navigate those murky waters, until I find a compelling reason to abandon it.

** In retrospect, I would make an important distinction here. Predictive abilities are not the only abilities which organize our behavior in accordance with the unfolding of nature. There are more basic anticipatory abilities. Prediction requires language. More primitive forms of anticipation require less sophisticated forms of representation. [This post was edited to include this footnote.]

Monday, July 20, 2009

A Brief History Of The Philosophy Of Science [Revised Edition]

A friend of mine recently gave me some advice: Don't let your philosophical pursuits get side-tracked by the atheism-vs.-religion debate. I pointed out that the history of modern science and modern philosophy is inextricably tied to the debate between atheism and religion. The philosophy of science has, since Descartes and Bacon, developed in explicit reaction to religious practice. The pursuit of scientific foundations has partly been the pursuit of intellectual liberation from religious dogma.

In the sixteenth and seventeenth centuries, rapid advancements in philosophy and science began to change the way people understood themselves and their relationship to the world. For example, Copernicus challenged conceptions of humanity by suggesting that the earth was not at the center of the universe. Galileo challenged conceptions of nature by reducing it to mathematical terms. The philosophy of science became a central issue in intellectual life.

There is a widespread misconception that science itself began during this relatively recent period in history. Of course, the most advanced, formal areas of science, such as physics and chemistry, started to take their current forms during the Renaissance. However, science, as the process of formalizing discovery, is much, much older. It probably began when the knowledge required to produce fire and tools was first communicated within communities. The Renaissance did not see the beginning of science; rather, it saw the beginning of an effort to define science as a distinct enterprise from theology.

Having seen what happened to Galileo and Bruno, Renaissance philosophers and scientists knew that, if they were going to challenge church teachings, they had better be sure their arguments rested on firm foundations. It was thus thought necessary to find ultimate principles or foundations for scientific knowledge.

Rene Descartes proposed the method of hyperbolic doubt to establish foundations of knowledge which were thought to exist independently of, but in harmony with, theology. Francis Bacon proposed the method of induction, which Sir Isaac Newton later incorporated into his Principia Mathematica.

Earlier, in the Middle Ages, theology and philosophy were one and the same. People like William of Ockham and Roger Bacon had made recognizable contributions to the philosophy of science, though we call it that only in retrospect. Of course, the Renaissance and modern era also produced great scientists who maintained strong intellectual ties to theology, including Newton. Yet, even Newton was intent on establishing principles of science which did not depend on theology or church doctrine for their validity.

Historically speaking, it is clear that modern philosophy has been heavily marked by its relationship to theology and religion. Modern science emerged as a competing form of authority, and its legitimacy was thought to be determined by the validity of its necessary principles.

This view of science persists today. Yet, ever since David Hume published A Treatise of Human Nature in 1739, there has been good reason to doubt that absolute foundations for science can be had. Hume demonstrated that judgments about nature and cause-effect relationships do not inevitably follow from reason or observation, but are only so many habits of thought.

In the early 20th Century, the logical positivists tried to overcome this problem (known as the problem of induction) by defining religious authority out of existence. Meaning, they said, could only be scientific, because meaning is defined in terms of verification. Unlike scientific theories, religious proclamations are not verifiable. However, this argument seems to be self-defeating; for how could one verify that all meaning is defined in terms of verification?
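
The self-defeating structure can be put as a rough syllogism (a simplification of my own, setting aside the positivists' exemption for analytic statements):

1) If a statement is meaningful, then it is verifiable.
2) The statement "all meaningful statements are verifiable" is not itself verifiable.
3) Therefore, the statement "all meaningful statements are verifiable" is not meaningful.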

Sir Karl Popper rejected logical positivism, promoting falsificationism as the basic philosophical principle of scientific discovery. Unlike religion, scientific theories have predictive value. That is, they can be demonstrated to be false. Their meaning is not dependent on their conditions for verification, as the logical positivists thought. Rather, their status is dependent on their conditions for falsification. Scientists do not verify theories by finding X number of examples. They thus do not depend on induction. Rather, scientists deduce predictions from theories, and test the theories accordingly. Popper thus avoided the problem of induction and the circularity of verificationism. He was able to distinguish scientific authority, not according to any indubitable foundations, but according to its ability to produce testable hypotheses about the world.
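
To make the deduce-and-test picture concrete, here is a minimal sketch in Python (a toy illustration of my own, not anything from Popper; the scenario, numbers, and tolerance are invented):

# A prediction is deduced from theory, then tested against measurement.

def predicted_total_momentum(m1, v1, m2, v2):
    # Deduced from theory: total momentum is conserved through the collision.
    return m1 * v1 + m2 * v2

def corroborated(measured, predicted, tolerance=0.01):
    # The test: a measurement outside tolerance counts against the theory.
    return abs(measured - predicted) <= tolerance

prediction = predicted_total_momentum(m1=2.0, v1=1.5, m2=1.0, v2=-0.5)  # 2.5
measurement = 2.497  # a hypothetical post-collision measurement
print("corroborated" if corroborated(measurement, prediction) else "falsified")

The point of the sketch is only that the prediction comes from the theory, not from tallying past observations; the observation's role is to test, not to generate.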

Popper's criterion remains popular among scientists today, and a common reason for rejecting religious authority is its lack of falsifiability. Yet, falsificationism has been criticized in the philosophy of science. For example, it has been observed that science does not always proceed by clear-cut cases of falsification. When data contradicts theory, ad hoc adjustments are often made. Also, problematic data is sometimes ignored for any number of reasons: it may be dismissed as a statistical outlier, or attributed to human error or faulty equipment. While falsificationism helps us understand what distinguishes science from religion in theory, it does not explain how science actually proceeds in the real world.

According to Thomas Kuhn, scientific advancement is most dramatically made when new paradigms change the way we interpret the data. Scientific theories are not abandoned because of falsifying data. Rather, they are replaced by new ways of thinking which change the way that data is understood. This view is not necessarily opposed to falsificationism, however. We might say that paradigm shifts are justifiable precisely because new paradigms have more predictive value.

Then Paul Feyerabend argued that science does not advance according to any fundamental rules or methods at all. In fact, he said, what people think of as "science" is a myth. He thus advocated an anarchistic, "anything goes" approach to science.

Finally, neo-pragmatists like Richard Rorty (following Wittgenstein) said there was no such thing as scientific advancement at all; at least, not if we think of scientific progress in terms of getting closer and closer to some absolute Truth-with-a-capital-"T." New scientific theories are just new ways of talking. There is no absolute, extra-theoretical standard to judge, for example, whether Einstein's theory of gravity is better than Aristotle's. Einstein and Aristotle spoke different languages, and we adopt yet another language when we compare them. Thus, there are no absolute foundations. No indubitable standards for measuring progress. No all-encompassing methods or principles. There is only language and how we use it. As Wittgenstein said, language is a tool with innumerable functions, and its meaning is how it is used.

While this may seem to undermine the validity of science, the opposite is the case. It does not undermine any supposed foundations which scientists depend upon for their work. On the contrary, it undermines the motivation for looking for ultimate foundations. This proves to be as much of a liberation from religious dogma as anyone could want; for the lack of foundations here applies as much to religion as it does to science.

The "modern" sciences have become dominant forces in the world, largely thanks to political as well as scientific advancements. Their formal tools and methods have produced previously unimaginable developments, ever broadening the scope of human understanding. Scientists rarely feel any pressure to justify their practices, no longer feeling the need to appeal to ultimate foundations. The tangible results of the process are where justification is found. Meanwhile, religion is struggling to maintain its status as a legitimate form of authority.

Science and religion are still competing forms of authority. Many people like me, not all of them self-described atheists, reject religious authority as unfounded and regard every aspect of our existence as being open to discovery. We are derided as followers of "scientism" or "naturalism." Our detractors claim that naturalism requires faith in the methods and principles of science. They say religion is better than science when it comes to informing our lives, because it rests on the firm, absolute foundation of God's will. They say science cannot answer important questions, that science cannot explain important aspects of our lives, such as the very desire to search for meaning in the universe.

These objections are all unfounded. The appeal to God's will has no discernible meaning. The demand for foundations is itself unfounded. There is therefore no basis for excluding any aspect of experience from science's grasp. We cannot decide ahead of time what is or is not open to formal discovery. Science is dynamic, disparate, and evolving. Perhaps its most salient feature is its theoretical unpredictability.

Of course, science does have recognizable general features. There are standards that scientists generally adhere to, such as logical consistency, Ockham's Razor, and Popper's criterion of falsifiability. However, these are not absolute foundations. They do not establish the ultimate legitimacy or authority of any science. Rather, they describe aspects of formal discovery. Consistency, parsimony, and falsifiability are guiding principles scientists try to follow. Ockham's Razor and logic help maintain the integrity of formal discovery, while falsificationism reminds scientists that success depends on the value their theories can demonstrate.

The point is, scientists do not have faith that any particular methods or principles work. Rather, scientists simply develop and use new methods and principles because they produce tangible results. Scientific progress is made when new methods of discovery are adopted, methods which replace or expand upon the methods previously used. Scientific theories are adopted, not believed. When we say that a scientific theory is "true," we only mean that it works. We do not believe in science as a matter of faith. We use it as a matter of practice.

Scientific knowledge is not just a collection of theories; it is the practical abilities engendered by theoretical frameworks. The meaning and value of science is determined by the results of its practice, and not in the search for its foundations. There is no need to look for absolute foundations, and there is no reason to wonder why none were ever found. The outcome of scientific discovery is its continuously developing foundation.

The burden of proof is not on science to justify itself. In fact, there is no single belief system or institution called "science." There is thus nothing to justify as such. There are many scientific methods and institutions, and there will be many more, and we have every right to call them into question. However, if we do so, we must have a reason. What reason do critics of science have for questioning its practices? How do they justify their authority to question science?

Appealing to God no longer gets the job done.

The success of the Enlightenment was not in establishing firm foundations for scientific knowledge. It was in revealing the lack of absolute foundations for knowledge in general, including theistic knowledge. Modernity's greatest triumph was not just in furthering the sciences, but in freeing discovery from the restrictions of religious authority. That is progress.

Saturday, July 18, 2009

Some Thoughts On Ockham's Razor and Induction

A blogger named John Pieret has also criticized Sean Carroll's article about naturalism, though I think his criticisms are somewhat misguided. I just left the following comment on his blog:

I have major problems with Carroll's treatment of naturalism and supernaturalism. (See here: Discovery, Demonstration, and Naturalism.) However, I don't agree with all of your points, specifically those about Ockham's Razor and the so-called "problem of induction."

Ockham's Razor is an indispensable explanatory tool. Consider the situation with ID again.

IDers might claim that ID is simpler than natural evolution, and that Ockham's Razor weighs in their favor. The question is, are they right?

The answer is: of course not. Natural evolution does not postulate any entities beyond our explanatory framework, and it does not postulate anything superfluous. It does not postulate entities beyond necessity.

ID, on the other hand, postulates an "intelligent designer" which is outside of our explanatory framework, which is not (and, some IDers would say, cannot be) explained. ID does not explain how the "designer" has done anything. It does not explain anything.

Ockham's Razor is not the principle of least effort. If it were, then any predictive theory would lose against hand-waving. No, the razor does not favor the argument which requires the least amount of work. Rather, it says that the best explanation is the one that does not postulate unnecessary entities. (Necessity is recognizable by comparing two competing theories.) Clearly the razor favors natural evolution, and nothing "supernatural."

Okasha's argument [that Ockham's Razor suggests that nature is simpler than it might really be] is not convincing, because the razor does not require us to claim that nature is "not complex." Rather, it requires us to recognize the uselessness of postulating unnecessary entities. Our theories should be as complex as necessary to make good predictions, and no more. This does not require making any assumptions about how simple or complex nature really is.

As far as the problem of induction and the Alpha Centauri example: I think Carroll did misrepresent the scientific view there. It is not that scientists claim outright, "of course momentum was conserved!" Rather, they predict that momentum is conserved, and maintain that attitude unless given strong enough reason to change it.

There is no problem of induction in science, however, because predictions are not induced by finite sets of examples. Rather, they are deduced from theoretical frameworks which define examples as such.

Friday, July 17, 2009

Discovery, Demonstration, and Naturalism

Over the past several months or so, I've approached discussions of science by focusing on the concept of discovery. Science is the formalization of discovery. There are as many scientific methods as there are formal methods for discovering phenomena. Accidental discoveries can be utilized by science, in so far as they can lead to formal methods of discovery.

It occurs to me that this view of science, while essentially valid, might be easier to communicate if I adopt another term: demonstration. Science deals in what is demonstrable. A method of discovery is a set of rules or procedures for demonstrating facts. A scientific method is defined by its rules and procedures for demonstrating facts about the world.

We might ask, what is being discovered here? Is it the rules, or is it the facts?

The answer is: both. A scientific discovery is defined in terms of the facts discovered as well as the rules/procedures for demonstrating those facts. It is the relationship between rules and facts which defines a scientific theory as such.

This leaves the question: What is the purview of scientific discovery? Are there some things which cannot be discovered/demonstrated, but which act as guiding forces in our lives?

Sean Carroll approaches this question in a recent contribution to Discover Magazine entitled "What Questions Can Science Answer?" Carroll does a decent job of describing the way science approaches questions of fact. He also points out the need to clearly define our terms. Yet, while he says the term "science" is well enough understood, he offers a problematic definition of "natural."

In what is essentially a definition of "naturalism," he writes: "By 'natural' I simply mean the view in which everything that happens can be explained in terms of a physical world obeying unambiguous rules, never disturbed by whimsical supernatural interventions from outside nature itself."

Scientists do not rely on any notion of the supernatural to formulate their conception of nature. So why include the word "supernatural" in the definition of "natural"?

Also, it is not clear that the world "obeys unambiguous rules." It is not even clear what that means. Perhaps Carroll means the world acts in a manner consistent with the predictions made by unambiguous rules. If we remove the part about the supernatural and clarify the reference to rules, we end up defining "nature" as that which can be methodically demonstrated. Nature is what is open to formal discovery, by definition.

Naturalism is not a scientific hypothesis which might eventually be falsified. It is not a conclusion based on scientific evidence. Naturalism is true by definition. It is a framework for talking about discovery and demonstration. It is a language for understanding the relationship between knowledge and action. The words "nature" and "science" go together like "bachelor" and "unmarried."

This makes it easier to correct a misconception regarding quantum mechanics. In quantum mechanics, there are events or relationships which are somewhat unpredictable, or "whimsical." This is not to say they suggest a "supernatural" or any other kind of intervention, though. The notion of intervention implies directedness, and there is no evidence that quantum unpredictability is somehow directed towards any ends. The whole point is that it lacks direction.

Quantum unpredictability does not undermine a naturalistic view of the world. It does not undermine the meaning or value of science. In fact, quantum unpredictability is quantified scientifically. Scientific theory predicts a certain degree of unpredictability, and measurements of that unpredictability can be tested against scientific theory. The so-called "whimsical" aspects of quantum mechanics might defy common sense, but they do not defy scientific practice.

The point to stress here is that, if there were some intent or direction behind these events, some force which guided the course of nature, it would be scientifically discoverable. It would be worthy of the term "nature."

Carroll's argument misses this important point. He takes the terms "natural" and "supernatural" to indicate equally coherent, competing theories of the world. He says: "The preference for a natural explanation is not an a priori assumption made by science; it’s a conclusion of the scientific method. We know enough about the workings of the world to compare two competing big-picture theoretical frameworks: a purely naturalistic one, versus one that incorporates some sort of supernatural component."

Carroll's view is wrong, and in a dangerous way.

The term "naturalism" has no place in science. It is a political rejoinder against supernaturalists, and nothing more. Supernaturalists wish to protect their beliefs and institutions from scientific scrutiny, simultaneously promoting the contradictory claims that they are beyond science's purview and that science must change to allow for their beliefs. Yet, the term "supernatural" remains incoherent.

Supernaturalism is not a philosophical or scientific point of view. It is a political strategy to corrupt our understanding of the relationship between knowledge and action, and our understanding of science and nature should not be compromised by it. When Carroll says naturalism offers "a more compelling fit to the observations," he is playing into a political trap. He is opening the door to a debate about how well naturalism is supported by the evidence; as though naturalism were defined in testable terms. At best, this will lead to confusion. At worst, it will fan the flames of ignorance about the value and meaning of science.

Thursday, July 9, 2009

More on the supposed paradox of identity

In my last post, I suggested there might be a paradox regarding personal identity. Specifically, it seems that our ability to refer to ourselves is undeniable, and yet we cannot specify what, exactly, we are referring to. Not even a purely abstract mind-stuff would seem to get the job done. So, what are we talking about, when we talk about personal identity?

If there is a paradox here, it is probably the same as the classic Ship of Theseus paradox. (There are other related paradoxes and philosophical arguments covered in the Stanford Encyclopedia entry on relative identity. I haven't reviewed that entry yet, so I'm not sure how similar or dissimilar my points here will be to anything on that site. I'm only offering the link as a suggestion--to myself as well as to others--for further research.)

The issue comes down to this: that we use different methods or standards for deciding on questions of identity depending on the circumstances. In some cases, the identity of an entity is determined by the identity of its component parts. This is called the Mereological Theory of Identity (MTI). For example, a bicycle that has been disassembled and then later reassembled is still the same bicycle, plausibly because enough of the parts are the same. Yet, in other cases, MTI does not seem to apply. Consider: Every part of the Ship of Theseus is gradually replaced. Eventually, every part of the ship is different, but the identity of the whole remains intact.

We are forced to look beyond MTI for some other criterion of identity. Spatio-Temporal Continuity (STC) is one candidate. The Ship of Theseus is changed gradually, so that its existence is not interrupted in space or time. The STC theory of identity works, but not in all cases. It doesn't work for the bicycle that was disassembled and later reassembled. So we are left with at least two distinct and incompatible theories of identity, two mutually exclusive notions of what it means to be something.
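
As a toy model (my own sketch in Python, not anything from the literature), the divergence can be made explicit: one predicate captures MTI, another captures STC, and they return opposite verdicts on the two cases.

# Two divergent criteria of identity, rendered as predicates (illustrative only).

def same_by_parts(parts_a, parts_b, threshold=0.9):
    # MTI: the same object if enough of the component parts are shared.
    return len(parts_a & parts_b) / max(len(parts_a), 1) >= threshold

def same_by_continuity(existence_uninterrupted):
    # STC: the same object if its existence was never interrupted in space or time.
    return existence_uninterrupted

bicycle = {"frame", "wheels", "chain", "saddle"}
rebuilt = set(bicycle)  # reassembled from the very same parts

print(same_by_parts(bicycle, rebuilt), same_by_continuity(False))  # True False

ship = {"plank_" + str(i) for i in range(4)}
replaced = {"new_plank_" + str(i) for i in range(4)}  # every part gradually swapped

print(same_by_parts(ship, replaced), same_by_continuity(True))  # False True

Neither predicate subsumes the other; a unifying principle would have to explain why both count as criteria of identity.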

We might try to find a unifying principle here, something capable of uniting these two criteria into a single criterion of identity. This, I think, is related to what Kripke and Putnam were up to in their theorizing about natural kinds. Basically, they argued that identity is a matter of causality. The bicycle which has been disassembled and reassembled is the same bicycle, not because it has the same parts, but because the reassembled bicycle stands in the right sort of causal relationship to the original bicycle. Similarly, the new Ship of Theseus stands in the right sort of causal relationship to the old one.

This approach makes intuitive sense, but it only glosses over the problem. We are left wondering what "the right sort of causal relationship" means, and why two remarkably different criteria can both be "right." What determines the rightness here, if not the two divergent criteria we were trying to unify in the first place?

It would seem that Kripke/Putnam do not offer a unifying principle, but only a means of ignoring the divergence. We still want to know why the notion of identity does not reduce to a single criterion when our understanding of identity seems to be singular.

Instead of trying to unify the criteria, we could respond with a resounding "so what?!" We might acknowledge that there are divergent criteria for identity, and that we have been misled into thinking that identity is a singular phenomenon.

There is nothing obviously wrong with this response, though it does force us to wonder why the notion of identity seems to be singular, when it really isn't. That is, it leaves us wanting to know how the notion of identity has come about and why we are so easily misled about it.

As some of my earlier posts have indicated (see here, here, and here), my response to this issue is decidedly Wittgensteinian. We can understand the notion of identity by understanding how the language of identity is used. To understand why we think of identity as a singular phenomenon, we must understand what is happening when we regard entities as such. This goes for everyday objects as well as people. Personal identity is not a special case. To understand what it means to be a person, we must understand what is happening when people are regarded as such.

The notion of personal identity is not established by introspection. I do not first know that I exist, and then wonder about whether or not there are other minds. Rather, I learn that I exist by learning that I am not other people. Identity (including personal identity) is a social phenomenon. I can refer to myself according to any number of criteria of identity (there need not be only the two mentioned earlier in this post), in so far as there is a community in which the rules for reference make sense. The meaning of identity is not necessarily a matter of causality, though having "the right sort of causal connection" might be part of any number of rules for identity.

We are so misled by the notion of identity because we misunderstand the nature of language. Rather than worrying about a mind/body problem or any paradoxes of identity, we should instead look at how language works.

A Possible Paradox of Personal Identity

I say "possible paradox" because I am not ready to commit to the idea that there is a paradox here at all. I only want to suggest that there might be a legitimate paradox implicit in the notion of personal identity.

It does not require an extreme amount of philosophical sophistication to recognize that personal identity is somewhat elusive. It is easy to see how the notion of identity breaks down when we think of it in terms of the body. For we would still call ourselves by the same name were we to lose any or all limbs, organs, or bodily functions, so long as we had a set of memories or perceptions which defined us as such. We are thus tempted to locate the essence of personal identity in the brain, in those processes which ground our memories and perceptions. Yet, even here we realize the notion of identity lacks foundation. For we can imagine our memories and perceptions being simulated by something other than our brain, a biological clone or computerized twin which would think of itself just as we think of ourselves.

The final temptation, then, is to think of personal identity as an abstraction, as a formal pattern or conventionally defined set of properties which does not depend on any particular organism for its existence. The self is an idea, and not a thing. It is thus supposed that there is a mind/body problem, an idea/thing problem, which philosophers must solve in order to understand what it means to be a person.

What we must pause to recognize is that the idea of this "problem" is based on the temptation to think of identity as an abstraction, as opposed to a thing. The question is, what is gained by giving in to this temptation? In other words, what question is answered by thinking of identity in these terms?

My suggestion is that nothing is solved at all. The temptation to postulate mind/body dualism offers no rewards. It tempts us, not because of any understanding it offers, but precisely because it offers an excuse for not understanding the nature of identity. When we say that the self is an abstraction, an idea, and not a thing, we are not explaining anything; we are only offering an excuse for not being able to explain something.

To see this more clearly, try to decipher what understanding might be gained by thinking of the self as an idea, as opposed to a thing. Consider, for example, what it would mean if some clone or computer simulated your identity. Under what conditions would you agree that, yes, that clone/computer was actually you?

I suggest that there are no such conditions. There is no possibility of any organism/machine seeming to you to be you. Of course, a machine could seem to somebody else to be you. But to you, it will always seem different. So long as you exist, then, there is a real, tangible difference between your identity and the identity of a machine which simulated you to any imaginable degree of perfection.

Imagine a clone of you is made and provided with an exact replication of your mental states. With each divergent experience, you and your clone will become different people. There are thus obvious experiential grounds for recognizing that you and your clone will not be the same. Yet, it is also obvious that some time must elapse before these differences become significant. There is a period, however brief, during which you and your clone have identical (or identical enough) mental states and physiological characteristics. During this period, you and your clone look at each other and utter in unison, "That's not me!"

There is no significant difference between your perception of a self and your clone's. There is no significant difference in your physiology. Yet, to each of you, the other is not you. Thus, the idea that your identity is defined by any such mental properties, however tempting, is no more accurate than the idea that your identity is defined by the physical particulars of your body. Just as the notion of identity collapses when we try to locate it in the body, so does it collapse when we try to locate it in the mind, or any formal properties thereof.

This would seem to make the notion of identity a paradox. The notion clearly refers in a great many ways. It serves a valuable purpose in our lives. Yet, it has no discernible referent. At least, nothing which in all cases can be identified as such.