I rented Thor on DVD last night. I'm glad to say that much of the drama, comedy and action held up well on the second viewing. However, I want to add a bit to my initial review. Despite its notable flaws, I'd originally given the film four out of five stars. Unfortunately, I can only give the DVD version three-and-a-half, and that only barely. One reason is the lack of 3D. The 2D film isn't nearly as visually enthralling and impressive, and this makes it much easier to get distracted by the film's faults. When I left the theater the first time, I was charged and ready for more. After watching it again on DVD, I didn't feel much of anything at all. On top of that, I found one more plot point to criticize.
Philosophy, Film, Politics, Etc.
Sunday, September 25, 2011
Saturday, September 24, 2011
A while back I wrote a post in which I took a Wittgensteinian line against Descartes' cogito ergo sum. I was never all that happy with parts of it, and finally got around to fixing it. I'm not going to repost it, but here's the link: Descartes Contra Wittgenstein. I took out some parts that were a bit sophomoric and added a little to give a better sense of what Descartes was on about. I wouldn't say the piece is now perfect, but it's a lot better than it was before.
Posted by Jason Streitfeld at 4:08 PM
Thursday, September 22, 2011
Jason Stanley has posted part of his and Carlin Romano's recent "Philosophical Progress and Intellectual Culture" panel discussion.
There's a lot of humor and good spirit here, until Carlin Romano starts talking. Jason Stanley is a bit all over the place, but his points tie together nicely enough and are delivered with panache. I am entirely sympathetic with his presentation and point of view (except about the propositional nature of practical ability, but that's pretty irrelevant here, and I think Stanley is even a little tongue-in-cheek about it at the end). Then Romano gets up and immediately goes on the attack. His criticism of academic and analytic philosophy is incredibly arrogant and ignorant.
His most humorous error results from his lack of familiarity with Grice's notion of implicature. He quotes Stanley, who says that "asserting that p implicates knowledge that p." Romano interprets this as a ridiculous error. He thinks Stanley believes that only a person who knows that p could ever assert that p, that the mere making of an assertion implies that what is asserted is true. That's not what Stanley means at all. Stanley's point is rather that part of what is communicated in an assertion that p is that the person making the assertion knows that p. But what is communicated is not necessarily true. The making of the assertion does not in fact imply that the person knows that p. It only means that the meaning of the assertion includes a statement of knowledge. (If this isn't clear, consider: I cannot assert that p whilst simultaneously asserting that I do not know that p. For example, the sentence "It is raining, but I do not know that it is raining" is problematic.) Romano is apparently unaware of this idea, which means he can't have much knowledge of Grice or, by implication (not implicature), of the philosophical tradition in which Stanley is working. As a result, he makes a ridiculous accusation against Stanley. This got some great reactions from the crowd.
What's remarkable is not Romano's ignorance itself. You wouldn't expect anybody to grasp these subtle distinctions without training. But that's the point: Romano fails to recognize that he is not qualified to speak critically about Stanley's book, and yet he devotes his entire presentation to criticizing that very book.
Romano thinks philosophical writing should be accessible for everybody. Or, if not everybody, at least for himself. He thinks he should be qualified to criticize every philosophical work. But he's not. If I were going to psychoanalyze, I'd suggest that he's insecure about his inability to understand the bulk of analytic philosophy. He expresses his frustration by criticizing academic and analytic philosophers for their inaccessible writing. As if it is their fault he can't understand them.
I wonder, would he make the same criticism of specialists in other fields, such as biology, physics, mathematics, or psychology, or does he think philosophy is somehow unworthy of advanced specialization?
I'd like to see how the rest of the discussion played out. From what we can see here, Stanley showed a decent amount of restraint and generosity in his initial response to Romano.
UPDATE: There's a good discussion with several links to other parts of the conference at Leiter's blog.
Sunday, September 11, 2011
Posted by Jason Streitfeld at 11:10 AM
Wednesday, September 7, 2011
Tuesday, August 30, 2011
Sunday, August 28, 2011
For no particular reason, I just returned to this discussion at Butterflies & Wheels, in which Peter Beattie charges Massimo Pigliucci with two counts of deplorable argument. I was surprised to find that one of my comments had been deleted. Nobody had said anything about it. I don't follow B&W. That discussion is the only one in which I've ever participated, so I'm not sure what to think.
The deleted comment consisted of me pointing out to another commenter that I thought we had been "wasting our time on a clown with delusions of philosophical grandeur." I was speaking of Peter Beattie. I'm not sure, but I presume this is the Australian politician who was Premier of Queensland from 1998 to 2007. That's not why I called him a delusional clown. I called him a delusional clown because he was acting like one.
I don't blame anybody for deleting senseless insults, and if there was no justification for my comment, so be it. But sometimes harsh criticism is warranted. In this case, I believe the comment was both justified and accurate. I'll give a brief overview of the circumstances for those who aren't inclined to peruse the long thread. However, I do recommend the discussion to anyone interested in Sam Harris' The Moral Landscape and surrounding debates.
The problems started when I told Peter I thought he could have been a lot more charitable in his assessment of Pigliucci's review of Harris' book. Peter responded with a little condescension, telling me my "simple counter-assertion" was "not particularly helpful."
I elaborated. Peter claims that Pigliucci made two "deplorable" errors. I, in contrast, don't find anything deplorable about Pigliucci's review, however imperfect it may be.
First, Peter quotes Pigliucci, who wrote: "If these sentences do not conjure the specter of a really, really scary Big Brother in your mind, I suggest you get your own brain scanned for signs of sociopathology.” Peter responds as follows: "That anyone, let alone a professor of philosophy, should literally argue, ‘If you don’t agree with me, you should get your head examined’, is deplorable."
Let's look at Pigliucci's comment in context:
Indeed, Harris’ insistence on neurobiology becomes at times positively creepy, as in the section where he seems to relish the prospect of a neuro-scanning technology that will be able to tell us if anyone is lying, opening the prospect of a world where government (and corporations) will be able to enforce no-lie zones upon us. He writes: “Thereafter, civilized men and women might share a common presumption: that whenever important conversations are held, the truthfulness of all participants will be monitored. … Many of us might no more feel deprived of the freedom to lie during a job interview or at a press conference than we currently feel deprived of the freedom to remove our pants in the supermarket.” If these sentences do not conjure the specter of a really, really scary Big Brother in your mind, I suggest you get your own brain scanned for signs of sociopathology (or watch a good episode of Babylon 5).
Pigliucci's comment about getting your head examined needn't be taken as an argument at all, and certainly not the literal argument Peter says it is. Pigliucci's comment looks like a semi-humorous, if abrasive, way of saying that Sam Harris' views are bordering on the sociopathic. That's an observation, not an argument. Maybe Pigliucci's language was a bit unprofessional, but the tone of the review is clearly informal. I don't see anything deplorable about that. I have no doubt that Peter's interpretation is uncharitable and implausible. Yet he chose to deny this, claiming that either Pigliucci was making a deplorable argument, or he wasn't supporting his assertions with an argument at all, which would be "at least as deplorable" for a professional philosopher.
The second "deplorable" point is where things got more heated. According to Peter, Pigliucci has grossly misrepresented Harris. Here's what Pigliucci says:
Harris says: “Many of my critics fault me for not engaging more directly with the academic literature on moral philosophy … I am convinced that every appearance of terms like ‘metaethics,’ ‘deontology,’ … directly increases the amount of boredom in the universe.” That’s it? The whole of the only field other than religion that has ever dealt with ethics is dismissed because Sam Harris finds it boring? . . .

Here's what Peter says in response:
Harris entirely evades philosophical criticism of his positions, on the simple ground that he finds metaethics “boring.” But he is a self-professed consequentialist — a philosophical stance close to utilitarianism — who simply ducks any discussion of the implications of that a priori choice, which informs his entire view of what counts for morality, happiness, well-being and so forth. He seems unaware of (or doesn’t care about) the serious philosophical objections that have been raised against consequentialism, and even less so of the various counter-moves in logical space (some more convincing than others) that consequentialists have made to defend their position. This ignorance is not bliss . . .
Harris excuses his omission of philosophical jargon by (only half-jokingly, I suspect) asserting that every piece of it “directly increases the amount of boredom in the universe” (TML, 197n1). Pigliucci says this amounts to a dismissal of all of metaethics, that Harris finds it boring, that TML as a whole “shies away from philosophy”. (And so on and all-too-predictably on.) Not only is this implausible even given the quote that Pigliucci used; Harris explicitly gives his reasons for “not engaging more directly with the academic literature on moral philosophy”: he arrived at his position not because of that literature, but for independent logical reasons; and he wants to make the discussion as accessible to lay readers as possible. Again, to distort a position beyond recognition in this way is deplorable.

Peter's claim about jargon is plainly wrong. Harris explicitly says that he is avoiding many metaethical "views and conceptual distinctions." He is avoiding "much of the literature." He's not just leaving out the jargon. True, Pigliucci is being hyperbolic when he says Harris is dismissing the whole field of metaethics, but his criticism isn't so far off the mark. Other professionals have responded to Harris in a similar fashion. For example, in another review of Harris' book, Troy Jollimore writes:
It would be one thing to try to write intelligently about moral skepticism while avoiding the language of academic philosophy—or at least, the unnecessarily finicky aspects of it—with the hope of reaching a general audience. But to try to avoid not only the terminology, but large portions of the subject matter itself—the “views and conceptual distinctions that make academic discussions of human values so inaccessible”—is to commit oneself to providing an incomplete and highly distorted account of the subject.

I pointed Jollimore's review out to Peter, but he was implacable. Instead of accepting the point and acknowledging his error, Peter insisted that Harris was just leaving out academic jargon that would make his ideas inaccessible. He accused me of being "more than careless" and said that my interpretation of Harris is "the least charitable one the text will (barely) support." When I pushed the point that Harris was not simply leaving out jargon, but dismissing arguments and ideas, Peter took issue with the word "dismiss" to the point of parody, insisting that Harris explained why he was not tackling so much of the literature in his book. But I never said Harris was dismissive without reason. The point is that Harris does, in fact, dismiss much of the literature, and that this has consequences for the value of his book. Perhaps it makes his book more accessible, but that does not invalidate Pigliucci's and Jollimore's criticism.
I was also getting fed up with Peter's tone. I took offense at the accusation that I was being careless, but Peter refused to apologize. He said he was just making a "factual statement" about my behavior: it appeared that I didn't "care enough" about the discussion. Yes, calling somebody a delusional clown is making a factual statement, too, but clearly there's room to disagree about whether or not that is appropriate. Instead of an apology, I received only condescension from Peter, and thus ended the discussion.
After witnessing another participant arguing in circles with Peter, whose argumentative strategies and philosophical acumen were consistently poor, I made the comment about having wasted our time on a clown. It was a fair assessment, I think.
Sunday, August 14, 2011
I won't be blogging so much in the coming months, since the school year begins in little more than a fortnight. I'm starting with King Lear, and this marks my first pedagogical venture into Shakespearean territory. To prepare, I'm reading a number of canonical texts from the Early Modern period which I shamefully admit I had not read before, including Bacon, Montaigne, and Machiavelli. Today, I looked at Martin Luther's famous De Servo Arbitrio ("The Bondage of the Will"), an impassioned rejoinder to Desiderius Erasmus. (Roughly, Luther's position was that mankind is not free to choose good or evil, but is determined to do so by divine providence. Erasmus believed that the question of free will was itself unnecessary.) Without analyzing the philosophical or theological issues, I just want to draw attention to Luther's rhetoric (in the first six sections of De Servo), which is pretty extraordinary. It betrays a desire not simply to counter Erasmus, but to pummel him into desperate submission.
Luther begins by responding to Erasmus's assertion that Erasmus is "so far from delighting in assertions, that [he] would rather at once go over to the sentiments of the skeptics, if the inviolable authority of the Holy Scriptures, and the decrees of the church, would permit [him]." The issue, then, is about assertions, whether we should delight in them, and how they relate to the authority of Scripture and the decrees of the church.
Luther immediately admits a need to bite his tongue: "I consider, (as in courtesy bound,) that these things are asserted by you from a benevolent mind, as being a lover of peace. But if any one else had asserted them, I should, perhaps, have attacked him in my accustomed manner." So Luther is giving us a restrained criticism, one deserving of a good-hearted opponent, and not such as he is apparently wont to give someone of less worthy mettle. And yet, Luther makes clear, Erasmus has touched a nerve.
Luther then proceeds to say that a Christian would not make the argument Erasmus has made: "What Christian would bear that assertions should be contemned? This would be at once to deny all piety and religion together; or to assert, that religion, piety, and every doctrine, is nothing at all." He makes the same point a little later, in equally forceful language: "As though you could have so very great a reverence for the Scriptures and the church, when at the same time you signify, that you wish you had the liberty of being a Sceptic! What Christian would talk in this way?"
Luther says Erasmus presents a position both "absurd" and "impious." Yet, Luther soon reminds us that he is holding back his true feelings: "What I should cut at here, I believe, my friend Erasmus, you know very well. But, as I said before, I will not openly express myself." Later, he says that Erasmus has put forward statements which are "without Christ, without the Spirit, and more cold than ice." Even better: "What shall I say here, Erasmus? To me, you breathe out nothing but Lucian, and draw in the gorging surfeit of Epicurus. If you consider this subject 'not necessary' to Christians, away, I pray you, out of the field; I have nothing to do with you." Shortly thereafter: "And it is difficult to attribute this to your ignorance, because you are now old, have been conversant with Christians, and have long studied the Sacred Writings: therefore you leave no room for my excusing you, or having a good thought concerning you."
Luther presents a complete and utter rejection of Erasmus, not only as an intellect, but as a person, and such is ever magnified by the assertion that Luther is holding back! I gotta say, philosophy and theology aside, that's some awesome writing. And it sorta puts recent debates about religion, atheism and rhetoric into perspective.
Luther is also known for writing harshly against the Jewish people, where the power of his rhetoric is just as evident, even in the opening lines:
I had made up my mind to write no more either about the Jews or against them. But since I learned that those miserable and accursed people do not cease to lure to themselves even us, that is, the Christians, I have published this little book, so that I might be found among those who opposed such poisonous activities of the Jews and who warned the Christians to be on their guard against them.
As terrible as his words are, and as damaging as they were and have been, I cannot help but admire Luther's ability to wield language to his ends. Can we not admire the art even when it is designed to such ill effects?
Sunday, August 7, 2011
Philosopher Alva Noe is discussing gender and neurobiology on the NPR blog (see here and here). His primary goal is noble. He wants to counter the cultural forces which stereotype men and women and which, in some cases, lead us to devalue ourselves and sabotage our own potential. He appeals to a work of popular science by Cordelia Fine, Delusions of Gender, which he says draws the following conclusion: "Whatever cognitive or personality differences there are between men and women cannot be attributed, except in a few isolated cases, to intrinsic biological or psychological differences between men and women, at least not in the current state of knowledge." I haven't read Fine's work, so I cannot say how well Noe has represented her conclusion. In any case, the conclusion he attributes to her is much weaker than the three claims Noe himself puts forward in the name of science.
First, Noe says the science clearly tells us that there are "no cognitively significant neurobiological differences between men and women." This is surely too strong, and not at all what Fine seems to conclude. It certainly isn't supported by any of the evidence Noe discusses.
"You are looking in the wrong place," Noe also says, "if you look to the brain for an understanding of what makes us different." This also goes against Fine's conclusion, which allows that intrinsic biological or psychological differences account for at least some personality differences between men and women (the "few isolated cases"). Indeed, considering just how obvious it is that neurological differences can cause behavioral differences, it is rather stunning to see anybody making such a flat claim as Noe's. Why suppose that neuroscience is incapable of furthering our understanding of gender differences? Noe makes no clear argument for this remarkably strong statement. (It was Noe's lack of argument that originally made me suspect a philosophical bias at play here. Perhaps Noe is showing some bias concerning the relationship between brains and behavior, though this is not so clear.)
Finally, he suggests that biology alone cannot account for human differences, and that we must also take "ways of thinking" into account. Noe seems to think that ways of thinking cannot be analyzed as biological phenomena. He says what makes us different is our own behavior, our ways of identifying ourselves. But why suppose this sort of activity is not biologically determined? Why suppose that ways of thinking are not behavioral dispositions like any other? Why suppose they are not a matter of biological development?
Noe suggests a false dichotomy between nature and nurture, between biology and culture. Cultural differences, however historical and arbitrary, can also be biological differences.
Let's look at the details of Noe's analysis to see how this all plays out. Following Fine, Noe surveys various studies showing that adopting a persona can have significant effects on our performance. If you think of yourself as a cheerleader, you will likely do less well on certain sorts of tasks than if you think of yourself as a professor. Other research suggests that female students at a private college do worse when they are primed to think of themselves as women than when they are primed to think of themselves as private college students. Men, in contrast, do better when primed to think of themselves as men.
Perhaps when we think of ourselves according to particular stereotypes, we prime our brains to act in certain ways. We might alter the structure of our cognitive capacities just by identifying ourselves one way rather than another. Noe does not go into the neurobiological aspects of this phenomenon, though it presumably has a neurological basis. Noe focuses instead on the relationship between behavior and society. It is because we think of ourselves in certain ways that we act in certain ways. The categories we use to define ourselves--man, woman, heterosexual, homosexual, cheerleader and professor--only exist by virtue of a "matrix of ideas," he says, which is reinforced by cultural institutions.
Of course, Noe says, we can distinguish between thick and thin categories. For example, we can identify homosexual behavior in various times (and even species) which do not have the requisite discourse for thick homosexuality. Similarly, we need not postulate a matrix of ideas to identify or account for instances of masculinity and femininity throughout the animal kingdom. But, Noe insists, there are also thick personality differences: behavioral differences which are based on ways of thinking, and not instinct.
While I do not challenge the thick/thin distinction, I do question Noe's interpretation of it. Surely we would be wrong to ignore culture and discourse when trying to understand human behavior, including gender. However, it is an old misconception to suppose that culture begins where biology ends. William James made the point well when he said human behavior is distinguished, not by an absence of instincts, but by a warring abundance of them. Yet, Noe concludes, "If biology is the measure of all things, then many of the categories we use to group ourselves into kinds of persons . . . are, in fact, unreal." Apparently, Noe thinks cultural differences exist in some realm beyond the limits of biological analysis. This looks more like a philosophical bias than a scientifically supported conclusion. The thick/thin distinction does not entail that thick personality differences are less biologically determined than thin ones. Not unless you assume that culture is not a matter of biological development.
Noe's own views aside, I want to make two brief remarks about Fine's conclusion (as Noe states it), because it looks potentially misleading. Again, it is: "Whatever cognitive or personality differences there are between men and women cannot be attributed, except in a few isolated cases, to intrinsic biological or psychological differences between men and women, at least not in the current state of knowledge." First, there is the mention of "intrinsic" differences, which are presumably differences which are not influenced by culture or other environmental factors. Since human biological development involves culture and other environmental influences, it is not clear what makes a biological difference intrinsic. Perhaps Fine is talking about genetic differences, though, as biologists often point out, even genetic differences are expressed through biological development, and so are sensitive to the environment. So it's not clear how cleanly Fine is distinguishing between intrinsic and extrinsic biological differences, and this makes her conclusion a little suspect. Second, even if there are few "intrinsic" differences (whatever that means), there can be many "extrinsic" biological differences. So, while a certain, limited view of biological influence may not account for a great deal of the gender differences we perceive, a broader view of biological difference might do the job.
The research shows that the way we think about ourselves can have unintended consequences for the way we act, and even on our very ability to perform. This is not exactly surprising, nor is it very new, though the particular studies Noe discusses are indeed interesting. However, I see no evidence that there are no (or even few) cognitively significant neurobiological differences between men and women, nor that there are any salient behavioral/personality differences between men and women which cannot be traced to biology. Noe's views do not have the weight of science behind them, but rather suggest a naive view of the relationship between biology and culture.
Wednesday, August 3, 2011
Putting together a few thoughts on free will which I've been toying with lately, I've come to consider two possible views of libertarian free will. One might be possible, I think, but the other doesn't seem to work. (I should mention at the outset that I'm not considering any version of metaphysical libertarianism that postulates supernatural entities.)
First, some general observations about free will and what it means to make a decision.
To be an alternative is to be represented as an alternative. To have an option is to have a representation of something as an option. Whether or not that representation corresponds to a physical possibility, or whether or not the choosing of that option is physically possible, is irrelevant. Viable options must be logically possible, not physically possible. All that matters for people to have options is for them to have processes of a particular sort which regard options as such.
If somebody snaps their fingers to my left, and I turn my head to look, I have not necessarily made a choice. I only chose to turn my head if part of my behavior involved regarding turning my head as an option. This need not have been a conscious act. I don't see the harm in supposing that we can make unconscious choices. However, I think it is evident that we do make conscious choices. Sometimes turning our heads to look at something is a conscious choice, and sometimes it isn't. To say it is a conscious choice is only to postulate a certain amount of reflective awareness (or control) over the deliberating process.
When we make a choice, we utilize a certain sort of process which can be wholly deterministic. However, I'm not sure it has to be deterministic for our choices to be valuable to us.
Libertarian free will is, on some accounts, the ability to make choices which are in line with our beliefs and desires, but which are not determined by our causal histories. This sort of free will is commonly rejected on the grounds that, if the choice is not based on our causal histories, then it cannot be based on our beliefs and desires--it can't be our choice.
Yet, if the generation of options (which, remember, are options by virtue of the fact that they are represented as options in our decision-making processes) utilizes a purely random process (such as quantum physics might allow), then we might have a decision-making process that uses our beliefs/desires to choose between alternatives which are not fully determined by our causal histories. I wouldn't assume that we have any such random-option generators, but I see no reason to discount the possibility. So this sort of libertarian free will may be worth having.
Interestingly, even if we don't have this sort of free will now, we might be able to develop it. Imagine a computer which, given a particular problem, produced possible solutions by utilizing a purely random process. If we acted on an option generated by the computer, then we would have, in a very real sense, chosen an alternative that was not determined by our causal histories, and yet which was based on our beliefs and desires. This sort of libertarian free will is thus a theoretical possibility for our futures, even if it is not a fact about our present.
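The thought experiment can be sketched in a few lines of Python. This is only an illustration of the structure of the argument: the pseudo-random generator stands in for whatever genuinely indeterministic (e.g. quantum) process produces the options, and the preference function is a purely hypothetical placeholder for the agent's beliefs and desires.

```python
import random

def generate_options(problem, n=3, rng=random):
    # Stand-in for an indeterministic option generator: it produces
    # candidate courses of action that are NOT fixed by the agent's
    # causal history (here, ordinary pseudo-randomness plays that role).
    return [f"{problem}-candidate-{rng.randrange(10**6)}" for _ in range(n)]

def choose(options, preference):
    # The deterministic half of the decision: the agent's beliefs and
    # desires (modeled as a scoring function) select among the options.
    return max(options, key=preference)

options = generate_options("route-home")
# Toy preference: prefer the option with the longest label.
decision = choose(options, preference=len)
```

The point the sketch makes concrete: even though `generate_options` is random, `choose` is entirely ours, so the resulting decision is both undetermined by our causal history and based on our (modeled) beliefs and desires.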
What must be clear, however, is that we don't need libertarian free will to make decisions. What makes a behavior a decision is the choosing between alternatives. It doesn't matter how the alternatives were generated.
The final point is that we don't need libertarian free will to have genuine responsibility. Freedom from causal history cannot make us more responsible for our actions. At least, not in a way that matters. When we judge people by their decisions, our judgment does not depend on their alternatives having been detached from their causal histories. The judgment is about what they chose to do, and the fact that they had other options to choose from. If I'm guilty of X-ing and not Y-ing or Z-ing, then it doesn't matter if I represented any of those options via some quantum indeterminacy. I'm guilty because I intentionally X-ed when I shouldn't have. Maybe I represented X, Y and Z to myself thanks to a quantum generator. Maybe I didn't. It doesn't matter.
Consider the paradox that would result if we took the opposite view, and said that we can only be responsible for choices if we were not determined to have them as choices. In that case, I could not be responsible for using quantum indeterminacy. But, then, how could I be responsible for the outcome of my use of quantum indeterminacy, if I wasn't responsible for the use of it in the first place? I certainly can't be responsible for the outcome of the randomizing process alone, since that was random. And yet I can't be held responsible for the utilization of the process. So there is nothing left for me to be responsible for.
We are responsible for our actions, not the processes which ultimately make them possible.
If libertarian free will is the idea that responsibility requires that we make decisions which are not determined by our causal histories, then I don't think it is a viable option. If, however, libertarian free will is just the idea that we can make choices in accordance with our beliefs and desires, and yet which were not determined by the past or the laws of nature, then I think it's a legitimate possibility. However, I don't see any legal or social matters riding on it. It's an interesting question, and it would be a great discovery about how our brains work, but it won't change how we think about responsibility.
Tuesday, August 2, 2011
I recently had a brief discussion about the non-existence of numbers with a friend and student of physics. I'm a skeptic when it comes to the existence of numbers. I think math is something we do, and that numerals and their corresponding words are tools without referent. There is no number 2, but only various roles filled by the numeral "2" and the word "two." I wouldn't even say the numeral "2" denotes a rule (or rules) for its use.
My friend agreed there was something obviously correct about my approach, but said that we still have to wonder about the correspondences. I didn't have time to respond, but the remark seemed to betray a common intuition about mathematics: that mathematical equations and theorems in some way correspond to facts. I don't think there's any reason to suppose this is true. Math does not correspond to anything, just like hammers don't correspond to anything. I don't think there's anything corresponding to the number two, for example. Nor must there be anything corresponding to the equation 2 + 2 = 4, or to any of Peano's axioms, and so on.
What we might want to account for isn't correspondence so much as utility. Why does math work? The obvious answer is: because organisms have evolved to do certain things. Roughly speaking, math is the formalization and utilization of systems of quantification in the identification and deployment of patterns. Why does that work? Well, why does the heart work? Doing math, like pumping blood, is something we are able to do for some evolutionary reason. There is a good answer, or set of answers, but not a philosophically puzzling one, even if we don't know all the details yet.
There might be another aspect of math that seems philosophically puzzling: that is, why does it seem like mathematical theorems are discovered, and not invented? In some sense, the theorems of mathematics seem to already be "out there." But where is "out there?"
One possible answer: As we develop our mathematical system (or systems), we constrain the possibilities for their development. A mathematical theorem is not simply invented out of nothing. So what is "out there" are the parameters of possible mathematical theorems determined by the mathematical system we are already using--or, perhaps even better: determined by our innate capacity for mathematical invention, which is often integrated with the constraints of our current mathematical system.
Monday, August 1, 2011
A thought just occurred to me. If we are capable of utilizing a purely random process, such as quantum mechanics might offer, then we could presumably use it in our formulation of potential plans/intentions prior to conscious deliberation. The randomization need not be in the conscious selection of a course of action--in the making of a decision itself--but in the unconscious production of options. Thus, we might choose to do X (as opposed to Y and Z) according to our beliefs and desires, and yet we come to identify X, Y and Z as options--and, indeed, come to have them as options--by a process which was not completely determined by past events. This only requires neurological processes devoted to randomly generating and selecting possible intentions. So, in choosing X, we are choosing something that was not determined by our causal history, and we are still acting on our beliefs and desires. We still choose what our physiology, psychology, etc., determines is the best option.
So, if there is something like quantum indeterminacy, we could have evolved a way of utilizing it. And it therefore seems that we could act in accordance with our beliefs and desires while also having alternatives which were not determined by the past or by any physical laws.
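As a toy illustration (not a claim about how brains actually work), the two-stage architecture I have in mind can be sketched in a few lines of code. A (pseudo-)random process stands in for the hypothesized quantum-random generator of options, and a fully deterministic evaluation stands in for deliberation according to one's beliefs and desires. All the plans and desirability scores here are invented for the example.

```python
import random

def surface_options(pool, n, rng):
    """The unconscious, indeterministic stage: which plans come to mind
    at all is settled by a random process (a stand-in for whatever use
    the brain might make of quantum indeterminacy)."""
    return rng.sample(pool, n)

def choose(options, desirability):
    """The deliberative stage: fully deterministic, selecting whichever
    surfaced option best satisfies one's beliefs and desires."""
    return max(options, key=desirability.get)

rng = random.Random(42)  # seeded only to make the sketch reproducible
pool = ["walk", "bike", "drive", "stay home", "take the train"]
desirability = {"walk": 3, "bike": 4, "drive": 1,
                "stay home": 0, "take the train": 2}

options = surface_options(pool, 3, rng)
decision = choose(options, desirability)
# The decision follows deterministically from one's desires, yet the
# option set itself was not determined by the past.
```

The point of the sketch is just that nothing in the selection step is random, even though the menu it operates on is.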
Somebody else must have put forward an argument like this before. It seems too simple to have gone unnoticed. I haven't read much of the literature on free will, but still, I'm a little surprised I hadn't thought of this before.
I recently wrote about the consequence argument and suggested that it might involve a notion of power which we don't necessarily need in order to have an influence over the future. I just came up with a much better response to the argument, however, which does not require any fussing over the word "power."
The consequence argument is as follows: If we have no power over X, and X completely determines Y, then we have no power over Y. Since we have no power over the past or over the laws of nature, and the past and the laws of nature together completely determine the future, then we have no power over the future. Thus, free will and determinism are incompatible.
The problem with the argument seems to be that it regards agents as existing outside of the causal nexus comprising the past and the laws of nature. If we think of ourselves as part of the past and the laws of nature, and we accept that the past and the laws of nature determine the future, then we are part of what determines the future. To the extent that we are part of what determines the future, we have power over the future. This looks pretty obvious.
The problem is the desire to situate ourselves entirely in the present, as if there was a decisive break between the past and our present moment of reflection. On the one hand, when we reflect on ourselves as rational agents, we are reflecting on the past as well as the present. We might thus put it this way: The past and the laws of nature determine the future via the present. Therefore, what exists in the present determines the future.
Sunday, July 31, 2011
This month, physicist and popular science writer Sean Carroll weighs in on the everlasting debate about free will. He says it's "as real as baseball," which means that it is not the sort of thing that we would expect to find in a detailed physical description of the universe, but that we can't imagine trying to talk about humanity without accepting it as a real phenomenon.
I have to criticize Carroll for failing to explain what he means by the phrase "free will" (a phrase which, he explains, does not have an agreed-upon meaning) and for failing to give us a reason to think it is as real as baseball. Carroll defends a pragmatic realism--a view that we should take as real whatever entities we benefit from postulating in a given language, regardless of whether or not we benefit from postulating them in our widest available language. So, we benefit from postulating the existence of baseball even though there is no need for it in the language of physics, and even though the language of physics has more predictive power than the language we use to talk about baseball. Similarly, he says, we can believe in free will even though it has no place (or need not have a place) in our most general and powerful languages. That would be a fine point to make, if we had some reason to think that "free will" is emergent in the way that baseball is. But since Carroll hasn't told us what he means by "free will," how can we decide whether it is as emergent (and thus as real) as baseball?
My next point concerns the science rather than the philosophy. Carroll talks about the possibility of having different levels of description, the microscopic (physics) and the macroscopic (emergent). Giving the example of time's arrow, he points out that the laws of physics are the same either forward or backward in time. The microscopic description of reality therefore does not distinguish between past and future--or, rather, the concepts of "past" and "future" are arbitrary when we are talking about physical laws. Yet, he points out, we clearly distinguish between the past and the future in a non-arbitrary way when we are talking about everyday life. The macroscopic world is irreversible. Carroll says the laws of physics don't account for this; so, to avoid contradiction when combining these two levels of description, we must add a new component to our discourse: the particular configuration of the universe. His claim is that, if we ignore the particular configuration of our universe, we end up with a time-reversible description of reality.
Maybe I should defer to the physicist here, but I have to pause and wonder: does this make sense? How does adding a description of the configuration of the universe make a difference? The second law of thermodynamics--the law which states that any isolated system will increase in overall entropy--is a fundamental law of physics and it is widely recognized as accounting for our common notion of time's irreversibility. So Carroll's discussion seems terribly wrong. What Carroll is saying is that the laws of physics are not consistent with the second law of thermodynamics, and that we must appeal to empirical facts about states of the universe in order to compensate for this discrepancy. It seems much more accurate to say that the laws of physics include the second law of thermodynamics, that the second law of thermodynamics cannot be deduced from the other laws of physics, and that it--like all the other fundamental laws--is supported by experimental evidence. So there is no conflict between the microscopic and macroscopic levels of description here.
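A standard toy model makes the issue concrete: in the Ehrenfest urn model, the microscopic update rule has no built-in arrow of time, yet a system started in a special low-entropy configuration overwhelmingly drifts toward equilibrium. This is only an illustrative sketch of the general point, not anything from Carroll's article.

```python
import random

def step(left, total, rng):
    """Ehrenfest urn: pick one of `total` particles uniformly at random
    and move it to the other box. The rule itself has no preferred
    direction of time."""
    return left - 1 if rng.randrange(total) < left else left + 1

rng = random.Random(0)  # fixed seed so the run is reproducible
total = 100
left = total  # a special initial configuration: all particles in one box
history = [left]
for _ in range(2000):
    left = step(left, total, rng)
    history.append(left)

# The count relaxes from 100 toward the 50/50 equilibrium and then
# stays near it: the observed irreversibility comes from the special
# initial configuration, not from the update rule.
```

On this way of seeing things, the debate is over where to locate the source of irreversibility: in a distinct law (the second law of thermodynamics) or in facts about initial conditions.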
True, quantum mechanics has suggested that the notion of temporal direction might not make sense on extremely small scales, but that is not what Carroll is talking about when he distinguishes the microscopic and the macroscopic. He's talking about two levels of describing the same reality: the level of emergent properties and the level of underlying physical laws. I think we can make such a distinction, but Carroll's discussion of the arrow of time seems to confuse the topic. Again, I'm no physicist, but I think he has said something quite wrong here, or at least misleading.
Now let's get back to philosophy. Carroll's curious discussion of levels of description leads him to a discussion of a well-known philosophical argument, called The Consequence Argument. The argument is as follows: If we do not have power over X, and X completely determines Y, then we do not have power over Y. Since we do not have power over the past or over the laws of nature, and (according to determinism) the past and the laws of nature together completely determine the future, then we do not have power over the future.
Carroll misrepresents the argument in a rather absurd way. He writes: "The consequence argument points out that the future . . . [is] determined by the present state just as surely as the past is." Does he really mean to say that the past and future are equally determined by the present? I doubt it, but it's not clear what he does mean to say.
In any case, Carroll says the consequence argument "mixes levels of description." The problem with the consequence argument is apparently just like the "problem" we have when we try to understand time's arrow by looking only at the laws of physics. Carroll continues: "If we know the exact quantum state of all of our atoms and forces, in principle Laplace’s Demon can predict our future. But we don’t know that, and we never will, and therefore who cares? What we are trying to do is to construct an effective understanding of human beings, not of electrons and nuclei."
It looks like Carroll has misunderstood the consequence argument. It does not depend on anybody being able to predict the future by looking at the present. It has nothing to do with electrons and nuclei, per se. There's no reason to think any levels of description have been mixed.
Carroll's argument aside, a "Who cares?" response to the consequence argument may be worth considering. Should we care if we are powerless to affect the future? Honestly, I don't see how we could not care. But there might be more here to consider. A fruitful discussion of this issue might focus on the notions of power and powerlessness. Perhaps we do have some power over the future, but not the sort that is implicated by the consequence argument. Maybe the consequence argument only denies us a sort of power which is not worth having.
Update: I've just considered a different criticism of the consequence argument, and it can be found here. The upshot: Since what is determined by the past is part of what determines the future, and we are determined by the past, then we are part of what determines the future. So the consequence argument is not sound.
Over at Russell's blog, I was asked why I would call something an "alternative" if it was never physically possible. If this is a deterministic universe, how could anything ever rightly be called an alternative?
My answer: It is represented to us as an alternative which we evaluate according to (often flexible) standards. The process of deliberation may be completely deterministic, but it is still a real process: there's plenty of evidence that such processes occur. They occur frequently in plain sight, in public discourse.
A comparison to natural selection might help. Darwin's use of the term "natural selection" might seem metaphorical, as if natural selection were fundamentally unlike artificial selection. I don't think that's the case.
Perhaps you think determinism means that there is neither natural nor artificial selection--that the term is inappropriate in a deterministic universe. I don't think that makes much sense. As I wrote in my last post, postulating an uncaused event would not make our decisions any more real. It would only make them utterly arbitrary.
Back to Darwin . . . Natural selection occurs when genotypes dominate their competitors in a population. They are differentially selected, which means that they dominate because they satisfy certain conditions better than their competitors. The same happens in artificial selection. The only difference is that, in artificial selection, the process has a new, unnecessary element: plans. The conditions which must be satisfied in artificial selection are part of breeding plans.
So, in both artificial and natural selection, the process can be completely deterministic, and yet the term "selection" has a precise and appropriate meaning, and this meaning is not so different from what we normally mean when we talk about decisions and choices. The main differences are that (1) in the latter case, we are selecting plans themselves, and not genotypes, and (2) the outcome of the process of selection is not the prevalence of a genotype in a population, but the adoption of an intention (represented plan of action) in human behavior.
Just as genotypes can be selected in a deterministic universe, so too can plans. We call the former sort of selection "speciation" and "breeding," and the latter sort "making a decision."
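The point that selection can be perfectly deterministic is easy to make concrete with a minimal replicator-dynamics sketch (the genotypes and fitness values here are invented for illustration): frequencies shift toward the genotype that best satisfies the fitness conditions, with no randomness anywhere in the process.

```python
def select(frequencies, fitness, generations=20):
    """Deterministic selection: each generation, a genotype's frequency
    is reweighted by its fitness and renormalized. Nothing here is
    random, yet 'selection' describes the process exactly."""
    freqs = dict(frequencies)
    for _ in range(generations):
        mean_fitness = sum(freqs[g] * fitness[g] for g in freqs)
        freqs = {g: freqs[g] * fitness[g] / mean_fitness for g in freqs}
    return freqs

start = {"A": 0.4, "B": 0.4, "C": 0.2}
fitness = {"A": 1.0, "B": 1.2, "C": 0.9}
result = select(start, fitness)
# "B", the genotype that best satisfies the conditions, comes to
# dominate the population, deterministically.
```

Swap "genotypes" for "plans" and "fitness" for "satisfaction of desires/needs," and the same deterministic structure describes making a decision.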
Russell Blackford has written an interesting post which has spawned an interesting discussion about free will. Russell's confused a few of his interlocutors and says he feels a little bit alone in his neck of the internets. Since I think I agree with his view of free will and the related discourse, I've decided to throw in my two cents.
To say we make a choice or a decision is merely to say that we adopt one plan among given alternatives. This does not imply that the alternatives were ever physically possible, nor does it imply that the decision could have been other than what it was. All it implies is that (1) there are representations of plans as options for future behavior, (2) one of those representations becomes an active part of our behavior (as an intention) and (3) the representation of options as such plays a causal role in the production of the intention (by satisfying some conditions which we normally think of as desires/needs). There need not be a "free" act which takes us from (1) to (2). There simply need be (1), (2) and (3). That's enough for there to be a decision/choice.
I think that's the sort of thing Russell has in mind. It fits our normal talk of decisions/choices and it doesn't require any indeterminacy in the universe. And I agree with Russell that any stipulation of an uncaused act which would presumably get us from (1) to (2) would not make our decisions any more real. There is no benefit (explanatory or otherwise) to postulating such an uncaused event. It would make our decisions free in a particular way, but it would also make them utterly arbitrary. We do have the ability to make more or less arbitrary decisions, but our sense of responsibility and accountability certainly does not depend on, and would not even slightly be enhanced by, an utterly arbitrary decision-making process.
When we say "I could have done otherwise," I suppose what we normally mean is that we did not feel strongly compelled to act one way rather than another. Or, if we did feel so compelled, we regard that compulsion as the result of a prior decision which was not compulsory. So we are admitting to a degree of weakness in the conditions which define our decision-making processes. This entails a degree of freedom with respect to a particular variety of causal influence--namely, freedom with respect to our own wants/needs. So maybe free will, in common terms, is just the ability to choose without compulsion--that is, without the feeling that we have to choose one option rather than another.
Friday, July 29, 2011
Not sure how far I'll go with this, but here's the first entry in my "Science Phiction" series. The point is to identify popular writing which mangles, misidentifies, or otherwise wrongly appropriates philosophical ideas or themes in the name of science. If I were going to award points, I'd award generously for writers who get both the science and the philosophy wrong. However, to qualify for entry, you only have to get the philosophy wrong.
First up: Astrophysicist Adam Frank gets the time-lag argument terribly wrong: Where is Now? The Paradox of the Present
Here's an excerpt:
When you look at the mountain peak 30 kilometers away you see it not as it exists now but as it existed a 1/10,000 of a second ago. The light fixture three meters above your head is seen not as it exists now but as it was a hundred millionth of a second ago. Gazing into your partner's eyes, you see her (or him) not for who they are but for who they were 10^-10 of a second in the past. Yes, these numbers are small. Their implication, however, is vast.
We live, each of us, trapped in our own now.
The simple conclusions described above derive, in their way, from relativity theory and they seem to spell the death knell for a philosophical stance called Presentism. According to Presentism only the present moment has ontological validity. In other words: only the present truly exists; only the present is real.
Frank's argument is that everything we perceive has come to us from different parts of space and time. Our perceived "now" is therefore "at the mercy of many overlapping pasts."
This is illogical. If all of our perceptual experience comprises information from different pasts reaching us at the same time, as Frank suggests, then the now is not at the mercy of those pasts. The now is rather the point at which information from those pasts converges. So there is no reason to reject presentism, as Frank does. (Presentism, Frank explains, is the philosophical view that only the present exists.)
To Frank's credit, there is a coherent philosophical argument in the vicinity. It's called the time-lag argument, and it was first proposed by Bertrand Russell in the first half of the 20th century. However, it is not an argument against presentism, but rather against direct (or "naive") realism. The logical conclusion of the argument is that the content of perceptual experience is not the world as it is, but the world as it was.
Frank also errs in supposing that his argument (or the related time-lag argument) relies on relativity theory. He is presumably referring to Einstein's theory of Special Relativity, which relies on the experimentally observed fact that light travels at the same speed relative to every observer regardless of their inertial frame. This fact, however, and the consequences Einstein draws from it, are not required for the sort of argument Frank is trying to make.
In line with the time-lag argument, Frank stipulates that light travels at a finite speed--though I don't see why the time-lag argument needs such a stipulation anyway. All the argument requires is that the information we receive as input (such as the touch of silk against our fingers) takes time to reach our perceptual processors. Of course, it's a well-established fact that light travels at a finite speed, and I don't suppose that is something we should ignore. The time-lag argument should take the speed of light into account, but it is not necessary for the argument as such.
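For what it's worth, the delays Frank cites are just distance divided by the speed of light, and they check out; note that the 3 cm eye-to-eye distance below is my own back-calculation from his 10^-10 figure, not something he states.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s (exact by definition)

def light_delay(distance_m):
    """Seconds for light to travel distance_m."""
    return distance_m / C

mountain = light_delay(30_000)  # ~1.0e-4 s: Frank's "1/10,000 of a second"
fixture = light_delay(3)        # ~1.0e-8 s: "a hundred millionth of a second"
eyes = light_delay(0.03)        # ~1.0e-10 s: implied gaze distance of ~3 cm
```

Nothing in this arithmetic invokes relativity; only a finite signal speed is needed, which is the point made above.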
There is much to be learned from the time-lag argument and the many objections which have been made against it. One objection is that the argument relies on a dubious ontological distinction between the perceiving self and the world. Many philosophers reject the idea of a distinct, inner observer which passively receives information from the senses. The very idea of sense-data, which Russell favored, has been widely criticized for decades. Maybe we shouldn't think of the content of perceptual experience as arriving at our minds through our senses. The content of experience might not be so easy to identify, if we are justified in postulating it in the first place.
Another criticism of the time-lag argument is that it ignores what we know about quantum physics: on the one hand, there may, theoretically, be particles (e.g., tachyons) which travel faster than the speed of light; on the other hand, some interpretations of quantum mechanics have it that information can travel instantaneously. On very small scales, causality might actually occur backwards in time. The very notion of temporal direction might only make sense at relatively large scales. These points do not force us to reject the time-lag argument, but they do make it less persuasive.
The question remains: Does contemporary science support or conflict with presentism? On the one hand, special relativity says that there is no privileged observer, no unique inertial frame, which defines the "now" of the universe. On that view, it makes little sense to say that the present is all that exists, since there is no particular present for the universe as a whole. On the other hand, relativity theory also regards time as a variable. Many physicists imagine a block universe, where all events at all times exist "side by side," so to speak. Our perception of time is therefore something of an illusion. This suggests that the present is not all that exists. Yet, we might prefer to say that everything exists in the present, but we are physiologically limited in our ability to regard the present in all its complex glory. The bottom line: There's plenty of room for debate.
Saturday, July 9, 2011
I've written another paper for another graduate "course," entitled "Theories of Linguistic Communication." It's not as strong or thorough as I'd like it to be, and it's a bit disorganized, but I don't have more time to work on it. I'm not sure how committed I am to the views I'm expressing here, either. How's that for a disclaimer? In a nutshell, my arguments and views are still in development, but hopefully this short essay will be of some interest.
Rules, Acts and Interpretation:
A Wittgensteinian View of Linguistic Communication
According to the standard view of linguistic communication, to know a language is to know a set of rules which allows one to deduce the truth-conditions of any well-formed sentence in that language (Recanati 2002, p. 105). These rules are semantic, which means they relate sentences to the propositions they express. Such rules may be context-sensitive: some linguistic entities may even “wear their context-dependent nature on their sleeve,” as Jason Stanley puts it (Stanley 2002, p. 150). Kent Bach thus distinguishes between wide and narrow context: there is “a short list of variables, such as the identity of the speaker and the hearer and the time and place of an utterance” which combines “with linguistic information to determine content (in the sense of fixing it)” (Bach 1997, quoted in Recanati 2002, pp. 110-111). This list comprises the narrow pragmatic context, the identification of which is recognized as a part of semantic interpretation, because the goal is not the evaluation of speaker intentions or beliefs, but the recovery of the truth-conditional content expressed by a sentence. When the goal of interpretation is the speaker’s intentions and beliefs, a different process of interpretation must occur, one which takes into account the wide context—in other words, one which is sensitive to any and every possible fact. There is no set of linguistic rules which limits the number, type or range of entities which can influence our interpretation of speaker intentions and beliefs. Speaker meaning is thus distinguished from sentence meaning: the former requires pragmatic processes of interpretation while the latter requires semantic processes. Semantic processes are distinguished both by their reliance on linguistic rules and their goal of identifying the values of lexical items. A composed list of such values is called an explicated proposition, or what is said by an utterance.
Pragmatic processes may result in the construction of another proposition, one which is implicated by an utterance. Pragmatic processes do not tell us what a sentence means, but only what a speaker means when they use a sentence. Speaker meaning and sentence meaning may often be identical, but in many cases diverge.
Since speaker meaning and sentence meaning often diverge, it would be misleading to suppose that one knows a language if one does not know the conventional uses of the language which account for this divergence. To fully speak and understand a given language, one must be privy to the idioms which allow language users to easily recognize when words are being used to mean something other than what they say. Furthermore, this divergence cannot be explained by semantic rules and is not identified by means of semantic interpretation—indeed, that is what distinguishes implicature as such. If it could be determined by semantic interpretation, it would not be an implicature. Thus, knowing a language cannot be simply a matter of knowing the semantic rules of deduction and the syntactic rules of combination. Rather, knowing a language must also include knowing the conventional rules which govern implicatures. If these rules are sensitive to any and every possible fact, however, it is not clear how determinate they can be. Unlike semantic interpretation, pragmatic interpretation is in some fundamental sense indeterminate.
The question I have so far been discussing concerns the nature and extent of the rules governing linguistic communication. At this point, I want to raise and address a few related questions which have divided contemporary theorists.
1. What is the relationship between semantic and pragmatic interpretation? According to the standard view, they are distinct processes and semantic interpretation is primary in normal linguistic communication. Pragmatic interpretation is not always necessary; it is only vital when speaker meaning diverges from sentence meaning. According to Recanati (2002), however, semantic interpretation is always dependent upon pragmatic interpretation: the fixing of the narrow context can only be done by appealing to the wide context. Semantic rules are not sufficient to determine what is said. Then there are relevance theorists, who claim that semantics and pragmatics are combined in a single process of interpretation, that the identification of speaker meaning occurs through the same process as the interpretation of sentence meaning, whether or not these meanings are identical or divergent (Carston 2002). Recanati agrees with the relevance theoretic view that no propositional content is recovered prior to pragmatic interpretation, though he still distinguishes between semantic and pragmatic processes of interpretation. In his view, the distinction rests on the conscious availability of premises about speaker intentions. Recanati claims there are pragmatic processes (secondary pragmatic processes, to be exact) which take beliefs about speaker intentions and beliefs as arguments in inferential processes concerning the intended meanings of utterances. Thus, when Recanati claims that pragmatic interpretation is always involved in linguistic communication, he means that primary pragmatic interpretation is always involved; he does not suppose that normal communication requires inferences about speaker meaning and intention.
2. Is semantic interpretation constrained by syntax? According to Stanley (2002), it is: “all the constituents of the propositions hearers would intuitively believe to be expressed by utterances are the result of assigning values to the elements of the sentence uttered, and combining them in accord with its structure" (Stanley 2002, pp. 150-151). According to relevance theorists, however, there are elements of what is said which cannot be determined by syntax and semantics alone.
3. Does semantic interpretation rely on inferences about speaker intentions and beliefs? According to inferentialists, semantic interpretation is an inferential process which relies on premises about the intentions and beliefs of the speaker. According to anti-inferentialists, no such inferences normally occur in linguistic communication. What is said, as opposed to what is meant, can be recovered without inferences from speaker intentions and beliefs. Linguistic communication is, in normal cases, as direct as perception.
Relevance theorists do not claim that linguistic interpretation rests on inferences about speaker beliefs and intentions. Rather, they claim that interpretation produces judgments about such beliefs and intentions as output. They deny that implicatures require prior outputs about sentence meaning as arguments in an inferential process of interpretation (what Recanati calls “secondary pragmatic processes.”) Recanati also rejects inferentialism, claiming that normal linguistic communication requires only primary pragmatic processes, and that no inferences of any kind need occur at all.
Carston (2002), a relevance theorist, claims that, because of the “highly context-sensitive nature of sense selection and reference assignment . . . they are matters of speaker meaning, not determinable by any linguistic rule or procedure for mapping a linguistic element to a contextual value, and so just as dependent on pragmatic principles as the process of implicature derivation” (Carston 2002, p. 134). Recanati observes this same context-sensitivity, which he calls “semantic underdetermination.” According to this thesis, the values of lexical items can only be determined by appealing to the wide context of utterance. The notion of wide context is distinguished by the fact that it is indeterminate: any fact at all might enter the wide context and influence the interpretation of an utterance. Thus, the correct interpretation of an utterance requires pragmatic interpretation: it cannot rely on semantic rules alone.
As noted above, Recanati distinguishes between two types of pragmatic interpretation: primary and secondary. Primary pragmatic processes are not determinable by linguistic rules and procedures for mapping lexical items to their values. Primary pragmatic processes do identify what a speaker means, and not simply what a sentence says by itself. Sentences do not, on Recanati’s view, say anything by themselves. Yet, Recanati observes, primary pragmatic processes do not rest on inferences about what a speaker means. They need not involve any inferences at all.
Carston goes further. Instead of relying on the relatively uncontroversial claim that semantic interpretation is deeply sensitive to context, she claims that semantic interpretation is not normally constrained by syntax:
There is a wide range of cases where it seems that pragmatics contributes a component to the explicitly communicated content of an utterance although there is no linguistic element indicating that such an element is required. That is, there is no overt indexical, nor is there any compelling reason to suppose there is a covert element in the logical form of the utterance, and yet a contextually supplied constituent appears in the explicature (Carston 2002, p. 135).
These constituents are what Stanley calls “unarticulated elements.” Stanley’s method of refuting the existence of such elements is to identify, on a case by case basis, hidden (but articulated) lexical items and so account for the explicature (aka “what is said”) in terms of the logical form (syntactical structure) of the uttered sentence.
Stanley’s methodological suggestion is that “an unpronounced element exists in the structure of a sentence just in case there is a behavior that would be easily explicable on the assumption that it is there, and difficult to explain otherwise” (Stanley 2002, p. 152). As an illustration of the application of this principle, Stanley presents an argument for the existence of unpronounced by-phrases in passive constructions, such as “The ship was sunk to collect the insurance.” The by-phrase, he says, must be present as a local controller of the unpronounced pronominal element, ‘PRO,’ which is supposedly the subject of infinitival clauses. How does Stanley know that there is an unpronounced pronominal element which must be controlled? Because, he says, sentences like “The ship sank to collect the insurance” are ungrammatical. Ships are not the sorts of things that can collect insurance. For Stanley, to say that a ship is not the sort of thing that can collect insurance is to say that “the ship” cannot control the pronominal element in “The ship sank [PRO to collect the insurance].”
Pace Stanley, it is not obvious that “The ship sank to collect the insurance” is ungrammatical. It is unusual, perhaps, because we do not normally attribute to ships the ability to collect insurance. The only obvious fact is that collecting insurance is an intentional activity, and ships are not intentional agents. So we reject the content of the sentence, but not its grammatical form. If the sentence is grammatical, however, there is no need to postulate an unpronounced pronominal element. Stipulating such an element is not the easiest way to explain our judgments about these sentences, and so Stanley’s criterion is not satisfied.
I take the preceding argument as grounds for rejecting Stanley’s example, though it is worth exploring Stanley’s full application of his proposed criterion. Stanley’s argument is that the alleged unpronounced pronominal element in passive constructions (such as “The ship was sunk to collect the insurance”) is controlled by unpronounced by-phrases. Stanley concludes that all passive constructions must include such unpronounced by-phrases. For example, I cannot say “The student was well-informed” without saying who or what informed the student. I doubt Stanley would claim that I necessarily know who or what informed the student. Rather, for the sentence to be true, somebody or something must have informed the student well, even if I don’t know who or what it is, and that somebody or something is what enters into the proposition.
My objection is that it need not be the case that a determinate somebody or something is responsible for a student being well-informed. Arguably, we sometimes use the passive voice precisely where no determinate element exists. So it is implausible to suppose that a by-phrase must be saturated for such sentences to have meaning. Stanley might be better off saying that there is an unpronounced by-phrase only in case it is required to control the alleged unpronounced pronominal element. Yet, we lack sufficient grounds for supposing any such element exists, and it would be circular to argue for its existence by appealing to the by-phrases it is supposed to require. True, it does seem that “The ship was sunk to collect the insurance” is true only in case there is some value for the entity which intended to collect the insurance by sinking the ship, even if I do not know who or what that entity is. Yet, we can claim that the by-phrase is implicated by the sentence and not a part of its syntactical form. The existence of an agent is implied—it is a logically necessary consequence of the sentence meaning—without being an articulated element of the sentence.
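The contrast can be made explicit with a rough formalization (my own illustration, not notation drawn from Stanley or Carston): the truth of the sentence requires an agent, but the agent surfaces only as an existentially bound variable in an entailment, not as an articulated constituent of the uttered sentence.

```latex
% "The ship was sunk to collect the insurance." entails, on this view:
\exists x \,\bigl[\, \mathrm{Sank}(x,\, \mathrm{the\ ship})
  \;\wedge\; \mathrm{Intended}(x,\, \mathrm{collecting\ the\ insurance}) \,\bigr]
% The agent x is bound inside the entailed proposition; nothing in the
% sentence's syntax articulates it, so the by-phrase can be treated as
% implied rather than as an unpronounced syntactic element.
```

On this reading, the falsity of the sentence when no such agent exists is explained by a logical consequence of the sentence meaning rather than by a hidden constituent of its syntax.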
It is not obvious that there are ever cases in which the only, or even the easiest, way to explain linguistic behavior is to postulate unpronounced lexical items. Stanley’s project appears to have an insurmountable methodological difficulty. In any case, this does not necessarily support Carston’s position that the logical form of an explicature underdetermines its meaning. That point ultimately depends on how we understand logical form.
I propose a distinction between the syntax of a sentence and its logical form. If the meaning of a sentence has logical entailments, then we might suppose those entailments are a part of its logical form. Indeed, there does not seem to be any principled way of distinguishing between what a sentence means and what is logically entailed by that sentence. This was evident in the case of “The ship was sunk to collect the insurance.” In so far as that sentence logically entails that somebody intended to collect insurance by sinking the ship, then we might suppose that agent is part of the proposition expressed by the sentence. Certainly, if no such agent exists, the sentence is false. Is it false because of the content expressed by its logical form or is it false because of a proposition which is logically entailed by its logical form? I do not suppose there could be any principled way of deciding this question. There is no methodological basis for distinguishing between meaning and logical entailment. If we take the proposition expressed by a sentence to have a logical form (as Carston and Stanley agree must be the case), then we have no reason to reject any logical entailments as being distinct from the proposition so expressed.
Stanley and Carston both equate the logical form of an explicature with the syntactical construction of its linguistic elements. Thus, while Stanley claims that logical form constrains the proposition expressed by a sentence, Carston claims that it does not. Yet, I propose that we instead take the logical form of an explicature to be the logical form of the proposition it expresses. Logical form is not determined by the sentence alone, but rather by the speaker’s intentions and beliefs as they are manifested in the use of a sentence.
Stanley claims that the logical form of a sentence (taken as identical to its syntactical structure) is determined by the speaker’s linguistic intentions (Stanley 2002, p. 150). However, according to Stanley, speaker intentions are constrained by the rules of the language. Speaker intentions determine logical form in so far as a speaker intends to utilize such-and-such properties of a language. It is because a speaker intends to assert X that the sentence uttered has the logical form of X. Another speaker might utter the same words without the intent to make an assertion, and so would not express a proposition at all. What distinguishes an assertion from the mere production of sounds or markings is an intention to make an assertion. I suppose that this, for Stanley, is why logical form is determined by speaker intentions: the speaker intends to follow the rules of the language.
The crucial question, then, is whether or not the intended sentence contains elements corresponding to all of the elements of the proposition it expresses. Does syntax determine logical form? If we take a sentence to be more than what is physically produced—more than an assortment of sounds or markings—but as a combination of elements which can be used to express a proposition, then the sentence may be said to have a logical form—to be a token of a logical type. What makes it a token is its logical form, and this seems to be what we mean when we talk about syntax. And if the logical form is identical to its logical entailments, then the sentence contains everything that is logically entailed by the utterance.
Yet, there must be some constraint on what can be considered a logical entailment. For what counts as logical entailment in ordinary discourse depends on assumptions about relational properties. What a person’s utterance logically entails is a matter of what that person believes about the world. These beliefs cannot be deduced from logic or language alone. So it appears that logical entailment is really just a matter of intended entailment. The proposition expressed by an utterance cannot be deduced by the rules of logic and language alone, but must follow from the speaker’s beliefs and intentions. This is not to say that the process of interpretation must take speaker beliefs and intentions as argument. It is only to say that some accordance with speaker intentions is required, and this accordance cannot be produced by strict adherence to the rules of logic or language. In so far as logical form is a matter of logical entailment, it cannot be a matter of syntax alone.
We now have a picture of logical form which is not limited by the syntactical properties of a sentence, but by the truth-evaluable properties of what a speaker intends a sentence to say. Sentence meaning is always intended sentence meaning, which is still distinguishable from speaker meaning—not because the latter alone is uniquely intended by the speaker, but because only the former incorporates syntactical features of the language. There are no principled, rule-based criteria for assigning logical forms to sentences. Syntax must play a part in sentence meaning, but it does not completely determine its logical form.

If the question is “What constitutes sentence meaning?”, then we seem to end up with a dilemma: we cannot identify any proper constituents of sentence meaning, because we cannot identify the essential properties of a sentence apart from its syntactical components, and these are not sufficient to determine logical form. Perhaps the question should not be “What constitutes sentence meaning?”, but rather, “How are notions of sentence meaning constructed?” This question is a question about how interpreters of speech acts go about creating notions such as what is said and what is meant. If we approach the topic this way, we may regard the notion of sentence meaning as the result of interpretation. Of course, we still want to be able to say things such as, “That is not what I meant.” We want to be able to test interpretations against intended meanings. Yet, we cannot do so by appeal to facts about particular sentences. We can only do so by fixing sentence meanings as interpreters of our own discourse. The notion of sentence meaning, then, is not determined by speaker intentions; rather, speaker intentions and sentence meaning are co-determined by the interpretation of utterances.
Sentences have meaning in so far as there are interpreters who interpret them. And it is permitted to postulate a sentence meaning in so far as our interpretation of certain sorts of behavior requires it. So, if there are linguistic rules which determine the logical form of sentences, these rules must exist as elements of interpretation. There are no facts about what a sentence means—or even about whether or not a sentence has been produced—prior to the interpretation of a speech act as such.
If we are going to investigate linguistic communication, then, we should focus on the processes of interpretation. This is not to say that speech production is irrelevant. Rather, it is to say that speech production cannot be identified unless we understand what it means to interpret speech. We might even define speech production as the intentional triggering of processes of linguistic interpretation. Whatever intentionally triggers processes of linguistic interpretation is, by definition, a speech act. Thus, there can be no study of speech acts apart from the study of interpretation.
The view I am promoting owes a debt to Wittgenstein and his rule-following paradox. In the controversial but influential formulation proposed by Kripke (1982), the paradox takes the following shape: there can be no facts about the intended meaning of an expression which determine any future uses of that expression. According to Kripke, Wittgenstein held that there are no facts about intended meaning at all. I am not convinced this is correct. I believe we can read Wittgenstein as denying that there are any facts about intended meaning which logically necessitate future uses of expressions. However, I do not think Wittgenstein thereby supposes that there are no facts about intended meaning at all. Wittgenstein is only rejecting the ability of any such facts to constitute a logical foundation for our metalinguistic discourse.
If we attempt to determine what rule we are following by our use of a particular expression, we end up with an infinite regress: we can only appeal to other rules or other expressions of the same rule, and so never arrive at a final interpretation of the rule we are intending to follow. Wittgenstein concludes: “there is a way of grasping a rule which is not an interpretation, but which is exhibited in what we call ‘obeying the rule’ and ‘going against it’ in actual cases” (Wittgenstein 1953/2001, section 201, p. 69). In other words, processes of semantic and pragmatic interpretation cannot themselves be wholly circumscribed by rules. The indeterminacy of pragmatic interpretation was noted at the outset, when it was observed that sensitivity to every possible fact indicates a fundamental lack of linguistic control. The Wittgensteinian view of logical form should not be surprising, then, once it has been acknowledged that semantic interpretation relies on the wide context.
According to inferentialists, you cannot understand an utterance without making inferences from beliefs about the speaker’s beliefs and desires. To correctly interpret an expression, you must first interpret the speaker’s beliefs and then deduce the correct interpretation of her utterance. Anti-inferentialists, on the other hand, claim that the correct interpretation of an utterance does not depend on any inferences from speaker beliefs, but can be determined solely by knowledge of the language and determinate features of the context of utterance. My view is perhaps more sympathetic to anti-inferentialism. We cannot identify which of a speaker’s beliefs and desires are entailed by her utterance without first interpreting that utterance. We cannot infer sentence meaning from beliefs and desires when that very meaning is what leads us to stipulate those beliefs and desires. It appears that the beliefs and desires we attribute to a speaker and the meaning we attribute to her utterance are interpreted together: we can speak of what a speaker says only because we can speak of her relevant beliefs and desires, and we can speak of the latter only because we can speak of the former. Our ability to interpret speech thus depends on our ability to interpret personal qualities; the interpretation of speech is an interpretation of personal qualities. While those qualities might not enter the process of interpretation as premises, they are represented in judgments about the speech. Still, this process need not be inferential.
I am also sympathetic to Carston’s relevance-theoretic approach. I see no reason to reject her position that semantic and pragmatic interpretation (even Recanati’s “secondary” pragmatic interpretation) can occur in a single process of interpretation. There does not seem to be a principled method for distinguishing between explicature and implicature. No set of conventions could decide ahead of time whether a particular speech act was conventional or unconventional, or whether an entailment was explicated or implicated. This is not to deny that some meanings are more implicit than others, or that some are less conventional than others, nor is it to deny that there is any such thing as “literal meaning.” It is rather to make the Wittgensteinian point that these notions are not wholly circumscribed by rules, and that there are no criteria to satisfy prior to acts of interpretation.
Recanati’s distinction between primary and secondary pragmatic processes relies on the notion of conscious availability. Supposedly, when an utterance defies normal interpretation, secondary pragmatic processes kick in and, using the sentence meaning as argument, infer possible speaker meanings. But the notion of conscious availability seems too weak to do much work here. We do not always need to take sentence meanings as argument when we identify implicatures. The identification of an implicature can be as automatic as any act of interpretation.
There is a temptation to say that some uses of a linguistic construction are simply incorrect or unconventional. However, we necessarily lack the means of determining what distinguishes the conventional from the unconventional, or the correct from the incorrect, in advance of particular cases. In other words, whether or not we should regard any given use of a linguistic construction as “correct” or “conventional” (or, conversely, as “incorrect” or “unconventional”) is theoretically undecidable.
This Wittgensteinian view is not opposed to analyzing content in terms of logical form. However, it is opposed to the notion that we could define logical form in advance of our analysis of content. Thus, when we interpret a speech act, we do not first identify the complete logical form and then apply rules to identify the content. We rather identify the logical form by virtue of our understanding of the content.
Theorists like Jason Stanley will always be able to revert to hypotheses about hidden lexical elements in attempts to defend the view that “every feature” of the communicated content “must be the semantic value of something” in the syntactical form “or introduced via a context-independent construction rule” (Stanley 2000, quoted in Stanley 2007, p. 36). Yet, since there is no way to determine the correct analysis of logical form in advance of our treatment of particular cases, Stanley’s strategy is suspect. It will always be possible to stipulate some hidden elements to account for a particular use of a linguistic construction, but this does not mean that the logical form somehow preceded that use. For Stanley’s method to work, we need an independent indication of logical form apart from actual speech acts—a purely semantic and syntactic theory—against which we can test our empirical observations. Yet Stanley himself seems open to the possibility that any such theory will be subject to revision when new pragmatic evidence is revealed.
What is the point of supposing that logical form precedes speech acts? Presumably, those who favor such a view suppose that correct linguistic interpretation requires adherence to rules for correctness, and such rules require some kind of linguistic discipline: communication must be “under linguistic control,” as Stanley puts it (Stanley 2007, p. 10). From the Wittgensteinian view, this is not even wishful thinking: it is simply untenable. Linguistic control does play a role in communication, but it is not foundational. Linguistic control is the result of acts of linguistic communication, and so cannot be a condition of such acts.
The view I am promoting is not that there is no role for rules in linguistic communication. Rather, it is that this role is not foundational; more precisely, the foundational aspects of rules do not precede our formulation of them. There are, unquestionably, cases in which disagreements or uncertainties about meaning are resolved by appeal to linguistic rules. Yet, it is a mistake to suppose that those rules somehow preceded our use of them in resolving our disputes.
Bach, Kent. 1997. The semantics-pragmatics distinction: what it is and why it matters. Linguistische Berichte, 8, 33-50.
Carston, Robyn. 2002. Linguistic Meaning, Communicated Meaning and Cognitive Pragmatics. Mind & Language 17, Nos 1 and 2 (February/April): 127-148.
Kripke, Saul. 1982. Wittgenstein on Rules and Private Language. Harvard University Press.
Recanati, Francois. 2002. Does Linguistic Communication Rest on Inference? Mind & Language 17, Nos 1 and 2 (February/April): 105-126.
Stanley, Jason. 2000. Context and Logical Form. Linguistics and Philosophy 23: 391-434, reprinted in Stanley 2007.
Stanley, Jason. 2002. Making It Articulated. Mind & Language 17, Nos 1 and 2 (February/April): 149-168.
Stanley, Jason. 2007. Language in Context. Oxford University Press.
Wittgenstein, Ludwig. 1953/2001. Philosophical Investigations. Blackwell Publishing.