Philosophy, Film, Politics, Etc.

Saturday, July 25, 2009

Karen Armstrong's "The Case for God"

Karen Armstrong is an advocate for NOMA ("non-overlapping magisteria"), the principle which says that science and religion involve distinct, non-overlapping domains of human interest. In Armstrong's view, each offers its own way of finding truth. Science comes from logos, the path of logic, reason, and evidence. Religion, on the other hand, comes from mythos, tapping into the mythological, emotional, intuitive path to wisdom. She believes that religion and science are both necessary for humanity, but that problems arise when either one attempts to invade the other's epistemological territory.

Anyone familiar with my views on religion will know that I find this point of view highly problematic. For one thing, it romanticizes both science and religion. More importantly, the distinction has no practical purchase: it cannot be justified by appeal to either logos or mythos, and so disputes over where the boundary lies cannot be resolved by common agreement.

Here are two clever responses to her latest book, The Case for God. Both articles are from The Guardian.

The first is a review by Simon Blackburn. I have only one point to add to Blackburn's insightful criticism: he neglects to point out the self-contradiction in Armstrong's position. She argues, using reason and evidence, that religion is about something which cannot be intellectualized by reason or evidence.

The second article is a humorously scathing "digested read" by John Crace.

I wonder how the responses in major American newspapers compare. Unfortunately, I expect they are a lot less critical, and a lot more empty-headed.

Wednesday, July 22, 2009

Induction and Scientific Reasoning

In this post I argue that enumerative, or "simple" induction (henceforth "induction")* does not play a significant role in scientific discovery. I construct this argument within a framework of epistemological behaviorism.


I. The Meaning and Value of Science

As I understand it, knowledge is another word for ability. Scientific knowledge is predictive ability, which is the ability to organize our behavior in accordance with the unfolding of nature.** In other words, science is the process of learning how to predict what is going to happen in new situations. Scientific knowledge is demonstrable in so far as the abilities it engenders are demonstrable and reliable.

Science is not the process of describing what has already happened, nor is it the process of describing what is happening at any given moment. Of course, science can help us understand what has already happened and what is happening right now. But the focus of science is always on the future, not on the present or the past.

Facts, not theories, are representations of the world. Theories are tools for constructing representations of the world. The theory/fact distinction does not refer to some distinction that exists beyond our discourse. The distinction between theories and facts helps us understand the relationship between knowledge and action. The meaning of a theory is in its ability to generate facts, and the meaning of a fact is determined by its relationship to our behavior. Knowledge, as such, does not exist as information stored in the brain, or on paper, but rather in the way information relates to behavior.

Scientific theories do not approximate extra-theoretic truth. There is truth in science in so far as science works; which only means that scientific practice engenders abilities to predict how things in the world will behave. In other words, scientific theories help us regulate our own behavior more effectively and in new ways. Truth is thus defined in terms of human behavior. When we say a scientific theory is true, it is quite like when we say of an accomplished archer that his aim is true. For example, quantum mechanics provides a formal method (or set of methods) for regulating our behavior in specific domains--domains which were not open to us before quantum mechanics was established.


II. Induction and Observation


Theoretical/experimental frameworks precede observations. There could be no observation of quantum entanglement before there was a theoretical framework that provided a mathematical model for it. The observations make sense because we have a way of understanding them.

Consider: There are observations in quantum mechanics we still cannot explain, such as what some call "wave/particle duality." When we say scientists observe duality, what we mean is that their observations cannot be explained by a single model, but indicate the need for two apparently incompatible models. Thus it is more accurate to say that we are not sure what scientists are observing, because we lack a unified theoretical framework to make sense of the behavior.

Could induction help us solve this problem? Can we resolve the issue of wave/particle duality by inductive means?

We might say that, because we consistently observe wave/particle duality, wave/particle duality is a fact of nature. Thus, we now have a scientific theory which says wave/particle duality is true. But all we have done here is move from a situation in which we lacked a unified framework for interpreting our observations to a situation in which we claim that our initial, confused interpretation of those observations is true. What understanding would be gained by this move? How could it be called scientific?

If science were to proceed in this way--if this were how scientific theories were built--science would be a hopelessly circular enterprise. We would end up saying, "of course we observe wave/particle duality, because that is what our theory predicts!" This is to mistake an observation for a theory. Fortunately, science does not work this way.

A scientific theory does not simply say "wave/particle duality is true, because we have observed it to be true." Rather, scientific theories provide frameworks for making new predictions. They are not limited to what has already been observed. This is what makes them testable. It is how they open the horizons of human behavior in unexpected ways. This is where the value of science lies.

Inductive reasoning does not tell us anything beyond what we have already observed; it only attempts to account for our observations by defining them as instances of a general rule. But this is no accounting at all. It is not science.


III. Is there any role for induction in science at all?

There are two possible places to look for an answer to this question: in the production and in the testing of hypotheses.

First, consider whether induction could play an important role in coming up with testable hypotheses. Charles Sanders Peirce, the American pragmatist, claimed there is a different sort of reasoning which we use to come up with scientific hypotheses. He called it abductive reasoning. This is the process whereby we construct theories to account for facts which do not fit into our prevailing interpretive framework. Yet, it is not clear that this process follows any rules, methods, or principles. In any case, the suggestion that it depends upon induction is unwarranted. In fact, induction cannot play a pivotal role here.

If we observe a number of cases of X, but we have no explanatory framework for understanding X, how could induction help us produce a theory for explaining X? Induction only produces the claim that X is a rule or law of nature. This is not a scientific hypothesis, because it makes no novel predictions. Thus, in the formation of hypotheses, induction has nothing to offer.

Let us instead consider whether induction might play a role when we test hypotheses against observation. Are we using inductive reasoning to generalize from our observations? For example, the conservation of momentum has been supported by a great deal of experimental data. Does that mean that, after X number of trials, the meaning or relevance of the theory was finally established?

Clearly the meaning of the theory was not induced by the observations. The hypothesis preceded the experimental results. Was the relevance of the theory induced? Relevance to what, exactly?

We might say that, after a theory has been corroborated by a number of tests, its relevance to humanity has been established. But this is not induced. The relevance of the theory is deduced from the fact that the theory works.

Or, we might say that the relevance is not to humanity, but to the universe itself. But what relevance could a theory have to the universe, apart from the relevance it has to humanity?

Theories are relevant in so far as they engender ability. Their meaning and value are defined in terms of behavior, after all. So there is no sense in trying to define relevance to the universe as a whole, apart from the contexts in which the theories are used.

I think we can conclude that the meaning or relevance of a theory is never induced from observation.

It may be claimed that the meaning of the observations requires induction; that, in order to claim that the observations have some relevance beyond their particular time and place, we must use inductive reasoning. Yet, we should not forget that the interpretation of an observation already requires that it have meaning beyond the particular event it represents. For an observation to be regarded as such, it must make sense within some theoretical framework. Thus, we cannot claim that the meaning or relevance of an observation with respect to a theoretical framework requires induction without also claiming that the meaning of the theoretical framework requires induction. But this has already been shown to be incorrect. The meaning of a theoretical framework is not induced from observations or particulars.


IV. Conclusion

The search for a role for induction in scientific reasoning seems to lead us in circles. It might indicate a real problem for philosophers, if we had some reason to think that induction just has to play a role in scientific reasoning. Fortunately, we do not. There is no reason to think that induction plays a significant role in scientific reasoning. So I think we can let the matter drop.


Notes

* The Stanford Encyclopedia of Philosophy's entry on the problem of induction indicates that this usage of the term "induction" is somewhat antiquated, and that the term is now sometimes taken to refer to a large range of synthetic or contingent judgments, or perhaps even to all contingent judgments. This confuses the language and makes it impossible to identify what the term "induction" is supposed to mean. In any case, the so-called "problem of induction" is quite specifically about enumerative induction, and it is with respect to this historical debate that I am writing. I see little reason to extend the term "induction" to other cases. In fact, I suspect that the confusion surrounding contemporary uses of the term indicates conceptual confusion within philosophy; I would thus prefer to stick to the traditional usage, if only as a lifeline when navigating those murky waters, until I find a compelling reason to abandon it.

** In retrospect, I would make an important distinction here. Predictive abilities are not the only abilities which organize our behavior in accordance with the unfolding of nature. There are more basic anticipatory abilities. Prediction requires language. More primitive forms of anticipation require less sophisticated forms of representation. [This post was edited to include this footnote.]

Monday, July 20, 2009

A Brief History Of The Philosophy Of Science [Revised Edition]

A friend of mine recently gave me some advice: Don't let your philosophical pursuits get side-tracked by the atheism-vs.-religion debate. I pointed out that the history of modern science and modern philosophy is inextricably tied to the debate between atheism and religion. The philosophy of science has, since Descartes and Bacon, developed in explicit reaction to religious practice. The pursuit of scientific foundations has partly been the pursuit of intellectual liberation from religious dogma.

Around 1600, many rapid advancements in philosophy and science began to change the way people understood themselves and their relationship to the world. For example, Copernicus challenged conceptions of humanity by suggesting that the earth was not at the center of the universe. Galileo challenged conceptions of nature by reducing it to mathematical terms. The philosophy of science became a central issue in intellectual life.

There is a widespread misconception that science itself began during this relatively recent period in history. Of course, the most advanced, formal areas of science, such as physics and chemistry, started to take their current forms during the Renaissance. However, science, as the process of formalizing discovery, is much, much older. It probably began when the knowledge required to produce fire and tools was first communicated within communities. The Renaissance did not see the beginning of science; rather, it saw the beginning of an effort to define science as an enterprise distinct from theology.

Having seen what happened to Galileo and Bruno, Renaissance philosophers and scientists knew that, if they were going to challenge church teachings, they had better be sure their arguments rested on firm foundations. It was thus thought necessary to find ultimate principles or foundations for scientific knowledge.

Rene Descartes proposed the method of hyperbolic doubt to establish foundations of knowledge which were thought to exist independently of, but in harmony with, theology. Francis Bacon proposed the method of induction, which Sir Isaac Newton later incorporated into his Principia Mathematica.

Earlier, in the Middle Ages, theology and philosophy were one and the same. People like William of Ockham and Roger Bacon had made recognizable contributions to the philosophy of science, though we call it that only in retrospect. Of course, the Renaissance and modern era also produced great scientists who maintained strong intellectual ties to theology, including Newton. Yet, even Newton was intent on establishing principles of science which did not depend on theology or church doctrine for their validity.

Historically speaking, it is clear that modern philosophy has been heavily marked by its relationship to theology and religion. Modern science emerged as a competing form of authority, and its legitimacy was thought to be determined by the validity of its necessary principles.

This view of science persists today. Yet, ever since David Hume published his A Treatise of Human Nature in 1739, there has been good reason to doubt that absolute foundations for science can be had. Hume demonstrated that judgments about nature and cause-effect relationships do not inevitably follow from reason or observation, but are only so many habits of thought.

In the early 20th Century, the logical positivists tried to overcome this problem (known as the problem of induction) by defining religious authority out of existence. Meaning, they said, could only be scientific, because meaning is defined in terms of verification. Unlike scientific theories, religious proclamations are not verifiable. However, this argument seems to be self-defeating; for how could one verify that all meaning is defined in terms of verification?

Sir Karl Popper rejected logical positivism, promoting falsificationism as the basic philosophical principle of scientific discovery. Unlike religion, scientific theories have predictive value. That is, they can be demonstrated to be false. Their meaning is not dependent on their conditions for verification, as the logical positivists thought. Rather, their status is dependent on their conditions for falsification. Scientists do not verify theories by finding X number of examples. They thus do not depend on induction. Rather, scientists deduce predictions from theories, and test the theories accordingly. Popper thus avoided the problem of induction and the circularity of verificationism. He was able to distinguish scientific authority, not according to any indubitable foundations, but according to its ability to produce testable hypotheses about the world.

Popper's criterion remains popular among scientists today, and a common reason for rejecting religious authority is its lack of falsifiability. Yet, falsificationism has been criticized in the philosophy of science. For example, it has been observed that science does not always proceed by clear-cut cases of falsification. When data contradicts theory, ad hoc adjustments are often made. Also, problematic data is sometimes ignored for any number of reasons: it may be dismissed as a statistical outlier, or attributed to human error or faulty equipment. While falsificationism helps us understand what distinguishes science from religion in theory, it does not describe how science always proceeds in practice.

According to Thomas Kuhn, scientific advancement is most dramatically made when new paradigms change the way we interpret the data. Scientific theories are not abandoned because of falsifying data. Rather, they are replaced by new ways of thinking which change the way that data is understood. This view is not necessarily opposed to falsificationism, however. We might say that paradigm shifts are justifiable precisely because new paradigms have more predictive value.

Then Paul Feyerabend argued that science does not advance according to any fundamental rules or methods at all. In fact, he said, what people think of as "science" is a myth. He thus advocated an anarchistic, "anything goes" approach to science.

Finally, neo-pragmatists like Richard Rorty (following Wittgenstein) said there was no such thing as scientific advancement at all; at least, not if we think of scientific progress in terms of getting closer and closer to some absolute Truth-with-a-capital-"T." New scientific theories are just new ways of talking. There is no absolute, extra-theoretical standard to judge, for example, whether Einstein's theory of gravity is better than Aristotle's. Einstein and Aristotle spoke different languages, and we adopt yet another language when we compare them. Thus, there are no absolute foundations. No indubitable standards for measuring progress. No all-encompassing methods or principles. There is only language and how we use it. As Wittgenstein said, language is a tool with innumerable functions, and its meaning is how it is used.

While this may seem to undermine the validity of science, the opposite is the case. It does not undermine any supposed foundations which scientists depend upon for their work. On the contrary, it undermines the motivation for looking for ultimate foundations. This proves to be as much of a liberation from religious dogma as anyone could want; for the lack of foundations here applies as much to religion as it does to science.

The "modern" sciences have become dominant forces in the world, largely thanks to political as well as scientific advancements. Their formal tools and methods have produced previously unimaginable developments, ever broadening the scope of human understanding. Scientists rarely feel any pressure to justify their practices, no longer feeling the need to appeal to ultimate foundations. The tangible results of the process are where justification is found. Meanwhile, religion is struggling to maintain its status as a legitimate form of authority.

Science and religion are still competing forms of authority. Many people like me, not all of them self-described atheists, reject religious authority as unfounded and regard every aspect of our existence as being open to discovery. We are derided as followers of "scientism" or "naturalism." Our detractors claim that naturalism requires faith in the methods and principles of science. They say religion is better than science when it comes to informing our lives, because it rests on the firm, absolute foundation of God's will. They say science cannot answer important questions, that science cannot explain important aspects of our lives, such as the very desire to search for meaning in the universe.

These objections are all of them unfounded. The appeal to God's will has no discernible meaning. The demand for foundations is itself unfounded. There is therefore no basis for excluding any aspect of experience from science's grasp. We cannot decide ahead of time what is or is not open to formal discovery. Science is dynamic, disparate, and evolving. Perhaps its most salient feature is its theoretical unpredictability.

Of course, science does have recognizable general features. There are standards that scientists generally adhere to, such as logical consistency, Ockham's Razor, and Popper's criterion of falsifiability. However, these are not absolute foundations. They do not establish the ultimate legitimacy or authority of any science. Rather, they describe aspects of formal discovery. Consistency, parsimony, and falsifiability are guiding principles scientists try to follow. Ockham's razor and logic help maintain the integrity of formal discovery, while falsificationism reminds scientists that success depends on the value their theories can demonstrate.

The point is, scientists do not have faith that any particular methods or principles work. Rather, scientists simply develop and use new methods and principles because they produce tangible results. Scientific progress is made when new methods of discovery are adopted, methods which replace or expand upon the methods previously used. Scientific theories are adopted, not believed. When we say that a scientific theory is "true," we only mean that it works. We do not believe in science as a matter of faith. We use it as a matter of practice.

Scientific knowledge is not just a collection of theories; it is the practical abilities engendered by theoretical frameworks. The meaning and value of science is determined by the results of its practice, and not in the search for its foundations. There is no need to look for absolute foundations, and there is no reason to wonder why none were ever found. The outcome of scientific discovery is its continuously developing foundation.

The burden of proof is not on science to justify itself. In fact, there is no single belief system or institution called "science." There is thus nothing to justify as such. There are many scientific methods and institutions, and there will be many more, and we have every right to call them into question. However, if we do so, we must have a reason. What reason do critics of science have for questioning its practices? How do they justify their authority to question science?

Appealing to God no longer gets the job done.

The success of the Enlightenment was not in establishing firm foundations for scientific knowledge. It was in revealing the lack of absolute foundations for knowledge in general, including theistic knowledge. Modernity's greatest triumph was not just in furthering the sciences, but in freeing discovery from the restrictions of religious authority. That is progress.

Saturday, July 18, 2009

Some Thoughts On Ockham's Razor and Induction

A blogger named John Pieret has also criticized Sean Carroll's article about naturalism, though I think his criticisms are somewhat misguided. I just left the following comment on his blog:

I have major problems with Carroll's treatment of naturalism and supernaturalism. (See here: Discovery, Demonstration, and Naturalism.) However, I don't agree with all of your points; specifically about Ockham's Razor and the so-called "problem of induction."

Ockham's Razor is an indispensable explanatory tool. Consider the situation with ID again.

IDers might claim that ID is simpler than natural evolution, that Ockham's Razor weighs in their favor. The question is, are they right?

The answer is: of course not. Natural evolution does not postulate any entities beyond our explanatory framework, and it does not postulate anything superfluous. It does not postulate entities beyond necessity.

ID, on the other hand, postulates an "intelligent designer" which is outside of our explanatory framework, which is not (and, some IDers would say, cannot be) explained. ID does not explain how the "designer" has done anything. It does not explain anything.

Ockham's Razor is not the principle of least effort. If it were, then any predictive theory would lose against hand-waving. No, the razor does not favor the argument which requires the least amount of work. Rather, it says the best explanation is the one that does not postulate unnecessary entities. (Necessity is recognizable by comparing two competing theories.) Clearly the razor favors natural evolution, and nothing "supernatural."

Okasha's argument [that Ockam's Razor suggests that nature is simpler than it might really be] is not convincing, because the razor does not require us to claim that nature is "not complex." Rather, it requires us to recognize the uselessness of postulating unnecessary entities. Our theories should be as complex as necessary to make good predictions, and no more. This does not require making any assumptions about how simple or complex nature really is.
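To make this reading of the razor concrete, here is a loose computational analogy of my own (not part of Pieret's or Okasha's discussion; the data and polynomial degrees are invented): in statistical model selection, the "best" model is not the simplest one outright, but the simplest one that still predicts well on data it has not seen.

```python
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    # the underlying process happens to be quadratic
    return 1.0 + 2.0 * x - 0.5 * x**2

x_train = np.linspace(0, 4, 12)
y_train = truth(x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0.2, 3.8, 50)
y_test = truth(x_test) + rng.normal(0, 0.3, x_test.size)

# Fit polynomials of increasing complexity and score each on
# held-out data: prediction, not raw simplicity, is the judge.
for degree in (1, 2, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out error {mse:.3f}")
```

Typically the quadratic fit wins on the held-out points: the linear model is too simple to predict well, and the degree-8 model chases the training noise. Parsimony here is relative to predictive need, and no assumption about how simple nature "really is" ever enters the picture.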

As for the problem of induction and the Alpha Centauri example: I think Carroll did misrepresent the scientific view there. It is not that scientists claim outright, "of course momentum was conserved!" Rather, they predict that momentum is conserved, and maintain that attitude unless given strong enough reason to change it.

There is no problem of induction in science, however, because predictions are not induced by finite sets of examples. Rather, they are deduced from theoretical frameworks which define examples as such.
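To illustrate that last point with a worked sketch of my own (the masses, velocities, and "measured" value below are hypothetical): from conservation of momentum, a definite prediction is deduced for a new situation, and the observation is then tested against it.

```python
# A prediction deduced from conservation of momentum: for a perfectly
# inelastic collision, p_before = p_after gives
#     v_final = (m1*v1 + m2*v2) / (m1 + m2)
# All numbers below are made up, for illustration only.
def predicted_final_velocity(m1, v1, m2, v2):
    """Final velocity of two bodies that stick together on impact."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.5   # kg, m/s

prediction = predicted_final_velocity(m1, v1, m2, v2)  # 1.5 m/s
measured = 1.48       # hypothetical experimental datum
tolerance = 0.05      # assumed instrument uncertainty

print(f"predicted {prediction:.2f} m/s, measured {measured:.2f} m/s, "
      f"agreement: {abs(prediction - measured) <= tolerance}")
```

The inference runs from framework to observation: the theory specifies what to expect and what would count as a failure, which no finite list of past trials could supply on its own.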

Friday, July 17, 2009

Discovery, Demonstration, and Naturalism

Over the past several months or so, I've approached discussions of science by focusing on the concept of discovery. Science is the formalization of discovery. There are as many scientific methods as there are formal methods for discovering phenomena. Accidental discoveries can be utilized by science, in so far as they can lead to formal methods of discovery.

It occurs to me that this view of science, while essentially valid, might be easier to communicate if I adopt another term: demonstration. Science deals in what is demonstrable. A method of discovery is a set of rules or procedures for demonstrating facts. A scientific method is defined by its rules and procedures for demonstrating facts about the world.

We might ask, what is being discovered here? Is it the rules, or is it the facts?

The answer is: both. A scientific discovery is defined in terms of the facts discovered as well as the rules/procedures for demonstrating those facts. It is the relationship between rules and facts which defines a scientific theory as such.

This leaves the question: What is the purview of scientific discovery? Are there some things which cannot be discovered/demonstrated, but which act as guiding forces in our lives?

Sean Carroll approaches this question in a recent contribution to Discover Magazine entitled "What Questions Can Science Answer?" Carroll does a decent job of describing the way science approaches questions of fact. He also points out the need to clearly define our terms. Yet, while he says the term "science" is well enough understood, he offers a problematic definition of "natural."

In what is essentially a definition of "naturalism," he writes: "By 'natural' I simply mean the view in which everything that happens can be explained in terms of a physical world obeying unambiguous rules, never disturbed by whimsical supernatural interventions from outside nature itself."

Scientists do not rely on any notion of the supernatural to formulate their conception of nature. So why include the word "supernatural" in the definition of "natural?"

Also, it is not clear that the world "obeys unambiguous rules." It is not even clear what that means. Perhaps Carroll means the world acts in a manner consistent with the predictions made by unambiguous rules. If we remove the part about the supernatural and clarify the reference to rules, we end up defining "nature" as that which can be methodically demonstrated. Nature is what is open to formal discovery, by definition.

Naturalism is not a scientific hypothesis which might eventually be falsified. It is not a conclusion based on scientific evidence. Naturalism is true by definition. It is a framework for talking about discovery and demonstration. It is a language for understanding the relationship between knowledge and action. The words "nature" and "science" go together like "bachelor" and "unmarried."

This makes it easier to correct a misconception regarding quantum mechanics. In quantum mechanics, there are events or relationships which are somewhat unpredictable, or "whimsical." This is not to say they suggest a "supernatural" or any other kind of intervention, though. The notion of intervention implies directedness, and there is no evidence that quantum unpredictability is somehow directed towards any ends. The whole point is that it lacks direction.

Quantum unpredictability does not undermine a naturalistic view of the world. It does not undermine the meaning or value of science. In fact, quantum unpredictability is quantified scientifically. Scientific theory predicts a certain degree of unpredictability, and measurements of that unpredictability can be tested against scientific theory. The so-called "whimsical" aspects of quantum mechanics might defy common sense, but they do not defy scientific practice.
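As a small illustration of what "quantified unpredictability" amounts to (a sketch of my own; the amplitudes are invented, and a simulated run stands in for repeated measurements): quantum theory assigns exact probabilities to individually unpredictable outcomes via the Born rule, and those predicted frequencies can be compared with observed ones.

```python
import random

# Born rule: for a qubit in the state a|0> + b|1>, the probability of
# measuring 0 is |a|^2. Individual outcomes are unpredictable, but
# their statistics are predicted exactly. Amplitudes here are made up.
amp0, amp1 = 0.6, 0.8        # real amplitudes; 0.36 + 0.64 = 1
p0 = amp0 ** 2               # predicted P(outcome 0) = 0.36

trials = 100_000
hits = sum(random.random() < p0 for _ in range(trials))
observed = hits / trials

# Agreement between the observed frequency and p0 is what testing
# "predicted unpredictability" against theory amounts to.
print(f"predicted P(0) = {p0:.3f}, observed = {observed:.3f}")
```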

The point to stress here is that, if there were some intent or direction behind these events, some force which guided the course of nature, it would be scientifically discoverable. It would be worthy of the term "nature."

Carroll's argument misses this important point. He takes the terms "natural" and "supernatural" to indicate equally coherent, competing theories of the world. He says: "The preference for a natural explanation is not an a priori assumption made by science; it’s a conclusion of the scientific method. We know enough about the workings of the world to compare two competing big-picture theoretical frameworks: a purely naturalistic one, versus one that incorporates some sort of supernatural component."

Carroll's view is wrong, and in a dangerous way.

The term "naturalism" has no place in science. It is a political rejoinder against supernaturalists, and nothing more. Supernaturalists wish to protect their beliefs and institutions from scientific scrutiny, simultaneously promoting the contradictory claims that they are beyond science's purview and that science must change to allow for their beliefs. Yet, the term "supernatural" remains incoherent.

Supernaturalism is not a philosophical or scientific point of view. It is a political strategy to corrupt our understanding of the relationship between knowledge and action, and our understanding of science and nature should not be compromised by it. When Carroll says naturalism offers "a more compelling fit to the observations," he is playing into a political trap. He is opening the door to a debate about how well naturalism is supported by the evidence; as though naturalism were defined in testable terms. At best, this will lead to confusion. At worst, it will fan the flames of ignorance about the value and meaning of science.

Thursday, July 9, 2009

More on the supposed paradox of identity

In my last post, I suggested there might be a paradox regarding personal identity. Specifically, it seems that our ability to refer to ourselves is undeniable, and yet we cannot specify what, exactly, we are referring to. Not even a purely abstract mind-stuff would seem to get the job done. So, what are we talking about, when we talk about personal identity?

If there is a paradox here, it is probably the same as the classic Ship of Theseus paradox. (There are other related paradoxes and philosophical arguments covered in the Stanford Encyclopedia entry on relative identity. I haven't reviewed that entry yet, so I'm not sure how similar or dissimilar my points here will be to anything on that site. I'm only offering the link as a suggestion--to myself as well as to others--for further research.)

The issue comes down to this: that we use different methods or standards for deciding on questions of identity depending on the circumstances. In some cases, the identity of an entity is determined by the identity of its component parts. This is called the Mereological Theory of Identity (MTI). For example, a bicycle that has been disassembled and then later reassembled is still the same bicycle, plausibly because enough of the parts are the same. Yet, in other cases, MTI does not seem to apply. Consider: Every part of the Ship of Theseus is gradually replaced. Eventually, every part of the ship is different, but the identity of the whole remains intact.

We are forced to look beyond MTI for some other criterion of identity. Spatio-Temporal Continuity (STC) is one candidate. The Ship of Theseus is changed gradually, so that its existence is not interrupted in space or time. The STC theory of identity works, but not in all cases. It doesn't work for the bicycle that was disassembled and later reassembled. So we are left with at least two distinct and incompatible theories of identity, two mutually exclusive notions of what it means to be something.

We might try to find a unifying principle here, something capable of uniting these two criteria into a single criterion of identity. This, I think, is related to what Kripke and Putnam were up to in their theorizing about natural kinds. Basically, they argued that identity is a matter of causality. The bicycle which has been disassembled and reassembled is the same bicycle, not because it has the same parts, but because the reassembled bicycle stands in the right sort of causal relationship to the original bicycle. Similarly, the new Ship of Theseus stands in the right sort of causal relationship to the old one.

This approach makes intuitive sense, but it only glosses over the problem. We are left wondering what "the right sort of causal relationship" means, and why two remarkably different criteria can both be "right." What determines the rightness here, if not the two divergent criteria we were trying to unify in the first place?

It would seem that Kripke/Putnam do not offer a unifying principle, but only a means of ignoring the divergence. We still want to know why the notion of identity does not reduce to a single criterion when our understanding of identity seems to be singular.

Instead of trying to unify the criteria, we could respond with a resounding "so what?!" We might acknowledge that there are divergent criteria for identity, and that we have been misled into thinking that identity is a singular phenomenon.

There is nothing obviously wrong with this response, though it does force us to wonder why the notion of identity seems to be singular, when it really isn't. That is, it leaves us wanting to know how the notion of identity has come about and why we are so easily misled about it.

As some of my earlier posts have indicated (see here, here, and here), my response to this issue is decidedly Wittgensteinian. We can understand the notion of identity by understanding how the language of identity is used. To understand why we think of identity as a singular phenomenon, we must understand what is happening when we regard entities as such. This goes for everyday objects as well as people. Personal identity is not a special case. To understand what it means to be a person, we must understand what is happening when people are regarded as such.

The notion of personal identity is not established by introspection. I do not first know that I exist, and then wonder about whether or not there are other minds. Rather, I learn that I exist by learning that I am not other people. Identity (including personal identity) is a social phenomenon. I can refer to myself according to any number of criteria of identity (there need not be only the two mentioned earlier in this post), in so far as there is a community in which the rules for reference make sense. The meaning of identity is not necessarily a matter of causality, though having "the right sort of causal connection" might be part of any number of rules for identity.

We are so misled by the notion of identity because we misunderstand the nature of language. Rather than worrying about a mind/body problem or any paradoxes of identity, we should instead look at how language works.

A Possible Paradox of Personal Identity

I say "possible paradox" because I am not ready to commit to the idea that there is a paradox here at all. I only want to suggest that there might be a legitimate paradox implicit in the notion of personal identity.

It does not require an extreme amount of philosophical sophistication to recognize that personal identity is somewhat elusive. It is easy to see how the notion of identity breaks down when we think of it in terms of the body. For, we would still call ourselves by the same name were we to lose any or all limbs, organs, or bodily functions, so long as we had a set of memories or perceptions which defined ourselves as such. We are thus tempted to locate the essence of personal identity in the brain, in those processes which ground our memories and perceptions. Yet, even here we realize the notion of identity lacks foundation. For we can imagine our memories and perceptions being simulated by something other than our brain, a biological clone or computerized twin which would think of itself just as we think of ourselves.

The final temptation, then, is to think of personal identity as an abstraction, as a formal pattern or a conventionally defined set of properties which does not depend on any particular organism for its existence. The self is an idea, and not a thing. It is thus supposed that there is a mind/body problem, an idea/thing problem, which philosophers must solve in order to understand what it means to be a person.

What we must pause to recognize is that the idea of this "problem" is based on the temptation to think of identity as an abstraction, as opposed to a thing. The question is, what is gained by giving in to this temptation? In other words, what question is answered by thinking of identity in these terms?

My suggestion is that nothing is solved at all. The temptation to postulate mind/body dualism offers no rewards. It tempts us, not because of any understanding it offers, but precisely because it offers an excuse for not understanding the nature of identity. When we say that the self is an abstraction, an idea, and not a thing, we are not explaining anything; we are only offering an excuse for not being able to explain something.

To see this more clearly, try to decipher what understanding might be gained by thinking of the self as an idea, as opposed to a thing. Consider, for example, what it would mean if some clone or computer simulated your identity. Under what conditions would you agree that, yes, that clone/computer was actually you?

I suggest that there are no such conditions. There is no possibility of any organism/machine seeming to you to be you. Of course, a machine could seem to somebody else to be you. But to you, it will always seem different. So long as you exist, then, there is a real, tangible difference between your identity and the identity of a machine which simulated you to any imaginable degree of perfection.

Imagine a clone of you is made and provided with an exact replication of your mental states. With each divergent experience, you and your clone will become different people. There are thus obvious experiential grounds for recognizing that you and your clone will not be the same. Yet, it is also obvious that some time must elapse before these differences become significant. There is a period, however brief, during which you and your clone have identical (or identical enough) mental states and physiological characteristics. During this period, you and your clone look at each other and utter in unison, "That's not me!"

There is no significant difference in you and your clone's perception of a self. There is no significant difference in your physiology. Yet, to each of you, the other is not you. Thus, the idea that your identity is defined by any such mental properties, however tempting, is no more accurate than the idea that your identity is defined by the physical particulars of your body. Just as the notion of identity collapses when we try to locate it in the body, so does it collapse when we try to locate it in the mind, or in any formal properties thereof.

This would seem to make the notion of identity a paradox. The notion clearly refers in a great many ways. It serves a valuable purpose in our lives. Yet, it has no discernible referent. At least, nothing which in all cases can be identified as such.