The Intentional Stance
One of Dennett's best-known contributions to philosophy is the idea of the intentional stance, which is meant to serve as a template for understanding minds and rational agency. However, Dennett's view of the intentional stance has shifted since he first formulated it in the mid-1970s. His earliest work on the topic advances the following thesis: an object is an intentional system just in case it is advantageous to regard it as such. Here he is, in 1976:
An Intentional system is a system whose behavior can be (at least sometimes) explained and predicted by relying on ascriptions to the system of beliefs and desires (and other Intentionally characterized features--what I will call Intentions here, meaning to include hopes, fears, intentions, perceptions, expectations, etc.). There may in every case be other ways of predicting and explaining the behavior of an Intentional system--for instance, mechanistic or physical ways--but the Intentional stance may be the handiest or most effective or in any case a successful stance to adopt, which suffices for the object to be an Intentional system. So defined, Intentional systems are obviously not all persons.

Dennett goes on to explain that even plants can be talked about as if they were intentional systems, which means that they are, lo and behold, "very low-grade Intentional systems."
How does Dennett differentiate the grades here? Presumably it has to do with how many orders of belief attribution the object in question can muster. However, once we treat an object as an intentional system, there's no reason to stop at any particular level. As FC Young (1979) points out, we always have the option of multiplying orders of intentionality. We can always reframe our explanations in increasingly complicated teleological terms. But the fact that we can does not mean we should. So why shouldn't we?
Dennett's response to this problem has earned him a fair bit of criticism: he tries to have it both ways. On the one hand, he does not clearly give up his original position, on which to be an intentional system just is to be capable of being successfully regarded as one via the intentional stance. On the other hand, he acknowledges objective constraints which determine whether or not we are justified in applying the intentional stance to an object. The problem is, he cannot have it both ways.
Dennett (1981) presents a situation where he thinks the intentional stance is obviously unjustified: his lectern. Clearly such an inanimate object cannot usefully be regarded from the intentional stance. His argument is that the addition of teleological elements does not add to the predictive power we already have. This presupposes that we are not the sort of people to regard all objects as having teleological functions. It also means that there are constraints on what can or cannot justify the application of the intentional stance. Sometimes it gives us added value, sometimes it does not. If it does not, then it should be cut (via Ockham's Razor, basically). But in that case, being an intentional system is not merely being thought of as one (in the context of making successful predictions). It is being thought of as one when doing so is necessary to sustain our ability to make predictions.
The problem for Dennett is profound. If there are conditions on what should or should not be considered an intentional system, then our theory of intentionality should help us understand why. We want a theory of intentionality that gives us the criteria, or at least points us in the right direction. The direction Dennett gives us is not useful. His position amounts to this: The objective criteria for determining whether or not an object is an intentional system can be found by determining whether or not our ability to make predictions about that object requires that we regard it as an intentional system. What we want to know, of course, is how to determine if it is necessary to regard a system as an intentional system. Dennett's approach has us chasing our tail.
It is often argued that Dennett's position is a useful way of observing the similarities between human beings and other animals, and even computers. Part of Dennett's position is that there is no "magic moment" when systems become so complex that they become real rational agents, as opposed to the sort that are defined merely by taking up the intentional stance towards them. So, for example, it is said that even chess-playing computers are rational agents, because the intentional stance is highly useful (even necessary, it is argued) for predicting their outputs.
Indeed, if you are playing chess against a computer, you can imagine that the computer has beliefs and desires about the game. Dennett says your best strategy is to imagine that you are playing against a rational agent, a system which has true beliefs about the game and which wants to win. I am not so sure. It may, in fact, be more productive to think of the computer as having been designed to simulate a chess-player without reproducing the beliefs and desires that a person would have while playing chess. There is no demonstrable need to imagine that the computer is a rational agent at all. We should therefore recognize that the computer simulates the behavioral outputs of a rational chess-player without having any beliefs and desires of its own.
How do I know the computer does not have any beliefs and desires of its own? Dennett provides us with the answer here: Because I do not gain any predictive value by supposing it has beliefs and desires of its own. I know it is a programmed simulation. I do not need to suppose it actually wants to beat me.
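To make the point concrete, here is a minimal sketch of what such a programmed simulation looks like. It is my own toy example, not Dennett's, and it plays tic-tac-toe rather than chess for brevity, but the moral carries over: every "choice" the program makes is exhaustive search plus a scoring rule. Nothing in the code answers to belief or desire, yet the design stance predicts its moves perfectly.

```python
# A toy game-playing "agent": exhaustive minimax over tic-tac-toe.
# It reliably picks strong moves, yet every step is bare computation;
# nothing here answers to "believing" or "wanting" to win.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation.
    'X' maximizes, 'O' minimizes; returns (score, best_move)."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    outcomes = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        outcomes.append((score, m))
    return (max if player == 'X' else min)(outcomes)

# The program "decides" by mechanically scoring every line of play.
# Predicting it needs only the design stance: search plus evaluation.
score, move = minimax(' ' * 9, 'X')
print(f"opening move: square {move}; value with perfect play: {score}")
```

If the program's outputs can be fully anticipated this way, then attributing desires to it adds nothing. That is exactly the test for predictive value just described.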
Dennett challenges us to find a situation in which we simply must regard a system as having beliefs and desires of its own. But that is not a hard challenge to meet at all. In your next conversation with a person--any person--talk to them as if they do not have beliefs and desires of their own. In a real-life situation with other walking, talking human beings, actually deny that they have beliefs and desires of their own. How far will that get you in explaining and understanding their behaviour?
But okay, that sort of challenge might be complicated, and it might be difficult to interpret the results. So here's a much simpler example: Yourself.
Dennett makes this very simple for us. He acknowledges that we all treat various objects as if they have beliefs and desires. We all take up the intentional stance. This is a given. It is also given that only rational agents can employ the intentional stance. It follows that we all must be rational agents. We could not employ the intentional stance if we were not.
You might say, "Sure, I know I am a rational agent, but how do I know you are?"
It is a reasonable inference, given that we are of the same species and function in more or less similar ways. I certainly appear to you to be employing the intentional stance, and I certainly seem to function just like you. Evolutionary theory gives us the only empirically viable basis for understanding these commonalities. So it is reasonable for you to assume that other human beings are rational agents like you.
The question is, why should we think that other objects or systems have the same rational agency we do? Dennett's argument is that it is useful to act as if they do. While that is true, this usefulness has limits and can often lead us astray. Dennett should recognize that there is a difference between making good use of a metaphor and mistaking that metaphor for a literal truth. We need some way of distinguishing between the metaphorical use of the intentional stance and the literal one. Dennett does not offer one, and so his project is unsuccessful.
Dennett acknowledges that there is no doubting we are rational agents. Yet, he says that we might, in theory, be able to predict all of our behaviour without taking the intentional stance at all. We might do it with only the physical stance. This is one of his most controversial ideas: if we learned enough about our physical bodies, we would not need to regard ourselves as rational agents at all! Yet, we would still be rational agents. So, our physical account would be leaving out true facts about us. Recall that, to avoid regarding his lectern as a rational agent, Dennett must acknowledge physical facts which determine whether or not something is a rational agent. So his complete physical account would seem to leave out the very physical facts which determine that he is a rational agent. But a complete physical account cannot leave out physical facts. This is impossible.
Perhaps Dennett could say the physical description only leaves out those facts under the description that they determine he is a rational agent. However, this seems problematic. It is not clear what other sort of description would be available, since being a rational agent should be intelligible under the physical stance--if it were not, the fact that Dennett's lectern is not intentional could not be decided by appeal to physical facts.
Here are Dennett's options, as I see them: (1) Appeal to the physical stance in order to ground rational agency (in which case the intentional stance collapses into the physical stance). (2) Regard rational agency as both real and irreducible to the level of physical cause and effect. (3) Claim that intentional states can be intelligible via the physical stance, but under a different description. This might be Dennett's best bet, but I think it is problematic, especially if Dennett claims that there are physical facts which we can appeal to in order to determine whether or not an object is a rational agent.
I should note that it will not do to try to reduce the intentional stance to the design stance. The design and intentional stances are not really distinct stances at all. To say that object x was designed is to say that it has a purpose or intention behind it. Designed objects are derived-intentional objects; when we make predictions according to design, we are making predictions about how they were intended to behave. We aren't supposing that the objects are the source of their intentionality, but we are still interpreting them as (extended) parts of an intentional system. The design stance is the intentional stance applied to artefacts of real or imagined intentional agents. We should be cautious about putting too much stock in the design stance for the same reason we want stronger criteria before we start believing that chess-playing computers are rational agents.
P.S. I don't have free access to it, but judging by the abstract, Paul Yu and Gary Fuller (1986) seem to make an argument which is in some ways parallel to my own.