Philosophy, Film, Politics, Etc.

Saturday, December 13, 2014

Thought, Speech and Interpretation

I was thinking about how we sometimes speak without editing ourselves, and we can have the experience of hearing ourselves as if we didn't know what we were going to say.  This kind of immediacy in speaking can be discomfiting.  Perhaps it is because some of us (and I count myself here) are so used to thinking about what we are saying while we are saying it that we confuse that monitoring with the actual act of saying something.  And it occurred to me that we could make this mistake about other mental processes, too.

So I asked the question:  Is thinking about what you are saying while you are saying it the same as thinking about what you are interpreting while you are interpreting it?

Can we imagine a situation where a person says one thing but thinks they are saying something else? I think it is possible.

And I suppose we can imagine a case where a person interprets a text one way, but thinks they are interpreting it differently, too.

Tuesday, November 25, 2014

TOK: Reason and Emotion

The following is a post I put together for my TOK class this year:

 We're often told to listen to reason and not our emotions.  Emotions might help us in the moment, but they won't help us in the long run.  Emotions are about immediate gratification (getting what you want right away, living for the moment), not long-term planning.  Emotions are wild and unpredictable.  Reason is domesticated, calm and respectable.

But is that really so?

Or is the truth more like this:  People who try to change your mind about something by telling you not to follow your emotions are actually being hypocrites.  When they tell you not to follow your emotions, they are actually appealing to your emotions.  They are appealing to your sense of responsibility, and where does responsibility come from, if not emotions?

Emotions give us love, empathy, compassion, joy and excitement.  Emotions may just be the glue that holds society together.

Consider this scenario:  You don't want to do your homework--you'd rather go out with some friends.  A voice in your head says, "Aw, the homework isn't so important.  You can get it done during a break tomorrow.  It won't be great, but it'll be fine.  Just go out and have some fun!"

Then another voice says, "Wait a minute, now.  Let's be responsible.  You know that if you don't do the homework tonight, it's not going to get done properly.  You might get a bad grade, and you won't learn the material."

The first voice returns:  "Aw, you're no fun.  Come on, let's have some fun for once!"

The second voice answers:  "Fun?  Is that all you care about?  What about your education?  What about your future?"

I'm sure you've had similar arguments in your head about all sorts of things.  Is this a fight between reason and emotion?

We might say that reason is the voice that is concerned about the future, about education and responsibility.  We might say that emotion is the voice that wants to have fun with friends, and which is trying to justify not doing the homework.  Emotion is the voice of rationalization.  So reason seems smarter, perhaps, but also totally boring and a real downer.

But we don't have to look at it that way.  Actually, I don't think we should look at it that way at all.

First of all, there are reasons to go out and have fun.  Not every homework assignment is going to make that much of a difference.   That argument about your education and your future all hinging on this one homework assignment?  That's a very bad argument.  Why should you think that your entire future is going to be destroyed because of one homework assignment?  It's not like the first voice was saying that all homework is a waste of time, and that you shouldn't do your school work at all.  The first voice was just talking about one homework assignment and one night.  So the so-called "voice of reason" here wasn't being very reasonable.

We can easily be misled into thinking that we are listening to the voice of reason, when all we are actually hearing is a very bad argument.

This is not a fight between reason and emotions.  It is a fight between two different points of view:  One view is that you need a break and going out with friends is more important than doing your homework.  The other view is that doing your homework is more important than going out with friends.  Both views rely on reason and emotion.

QUESTION 1:  Can you think of any real situations where you had a conflict between reason and emotion?  How do you know it was not just a conflict between two different points of view, each with their own emotions and reason?

Emotion keeps us interested in the world and our role in it.  If we had no desires or feelings, we would have no motivation to act.  Without emotion, our reason would be a cold, heartless tool.  In fact, we might not be able to reason at all if we didn't have emotions.  What motivates us to formulate arguments in the first place?  What motivates us to accept premises?  Remember: no matter how well-reasoned your argument is, your conclusion is only as good as your premises, and those can't all be based on reason.  If we had no emotions, we would have no reason to use reason.

Yet, there is a common belief that reason and emotion are against each other.  It's a very, very old idea, going back many centuries.  In fact, the idea that reason and emotion are enemies is such a well-established part of Western culture that it was used in the 20th century for propaganda. And so we have the 1943 Disney cartoon, "Reason And Emotion."

(The actual cartoon starts about 30 seconds into the video.)

This unfortunately very sexist cartoon was one of numerous wartime propaganda films that Disney made for the US Government in the early 1940s.  On the surface, the cartoon appears to be about the dangers of being led by our emotions.  That is not what the film is really about, though.  The purpose of the film is not to educate Americans about human psychology or theory of knowledge.  It is to increase support for the American war effort.

The propaganda really begins in the middle of the cartoon, when we see John Doe, an everyman, sitting at home trying to "keep up with current events."  He does not know who to believe or what to think:  On the radio, in the newspapers, in the streets, everywhere he looks he hears people talking about the war, about how America is doomed, about how it is a waste of money.  His emotions are driving him crazy.  Then the friendly narrator's voice comes in to guide him away from his emotions and towards reason.  And, of course, reason tells him that America should be in the war and everything is going to be okay, so stop worrying and just be happy.

The irony is that the narrator does not really lead us away from emotions at all.  Instead, we are given exaggerated representations of Hitler which appeal heavily to our emotions.  Apparently reason and emotion have a common enemy:  Nazi Germany.  At the end, we are told that reason and emotion should be patriotic--notice that patriotism is an emotion--and they should fly together.  If our emotions are good and healthy (in other words, if they are patriotic), then they will let reason drive.

The conclusion of the movie is very clear:  It tells us that any Americans who oppose the war are unpatriotic and led by emotions.  Of course, the cartoon does not appeal to reason--we are not given factual reasons to support the war--but only to emotion.  But it creates the illusion that we are following reason, and that is the key.

Again, it seems that when we are told that we must choose between reason and emotion, we are being misled.

QUESTION 2:  Why do we distrust emotions?  Perhaps because we think that emotion and reason are at war.  Where does this idea come from?

QUESTION 3:  What if reason and emotion don't compete for the driver's seat?  What if we need a totally different metaphor to understand the relationship between reason and emotion?  Can you think of any other possibilities?

Perhaps reason is the navigational tools on a sailboat, and emotion is the water and wind that keeps it afloat and moves it forward.

Or maybe reason is a flashlight, and emotion is the bulb that glows.  Or is emotion the flashlight and reason the bulb?

QUESTION 4:  The ultimate question is, in our quest for knowledge, how do we know when we can trust our emotions and the emotions of others?

TOK: Language, Identity and Community

The following is a post I put together for my Theory of Knowledge class this year:

How important is your language for your sense of identity--your identity as an individual, but also as a member of a nation?  It's common nowadays to associate a nation with a language, even though many nations have more than one national language.  Should a nation be defined by a single language?

Consider what political factors have shaped the language that you speak.  Why do you speak Polish, Flemish, Danish, Czech, German, Russian or English?  Why did you grow up learning your native tongue, and why are you learning new languages today?  Are you learning new languages so that you can join new knowledge communities?  Bigger knowledge communities?  Better knowledge communities?

Communities rely on communication.  Community, communicate:  Both words come from the Latin root communis, meaning "common" or "shared." Communication is not simply about sharing information.  Some say language is primarily for persuasion:  for getting people to think and act the way you want them to.  We communicate, ultimately, to arrange a shared way of life.  Language helps us work together; it shapes our expectations, allowing us to create very sophisticated maps of ourselves and the world around us.  But it also gives us a shared identity, and keeps foreigners out.  It brings people together, but it also builds walls.  It controls and limits, perhaps as much as it guides and enables.

The people who control language have control over the community.  Who controls the language in your knowledge communities?  (Think of the languages of science, of art, of culture, of politics, of education.)  What gives them that power?

Have you noticed how language can shape your political views?  Have you ever criticized a nation or a political faction for the way they talk?  Are there political conflicts in your home country that involve language?

In America, there are some cities with large Spanish-speaking populations.  Should those cities have Spanish street signs?  Should there be government agents in those cities who are fluent in Spanish?  Or should the residents of those cities have to become fluent in English?  Some Americans say that all Americans should speak English, but this remains a controversial topic in America.  Are there similar issues in your home country?

There can be benefits to having a shared language, of course.  One benefit is that language helps us share information, and this is necessary to create a knowledge community.  Do we need a shared language to have shared knowledge, though?

Imagine you and a friend visit a beautiful landscape and watch the sunset together.  You do not speak about it--and maybe you don't even speak the same language.  But you have shared an experience, and that gives you shared knowledge.

Imagine you want to teach a friend how to tie their shoes, but you don't speak the same language.  You can still instruct them with gestures.  You can show them how to do it, and so you can share your knowledge, even without a shared language.

When it comes to more abstract ideas, however, you need a shared language if you want to share knowledge.  The problem is, which language should be shared?

This is a political issue with a long and influential history.  Some people believe that their language is just better than all the others.  A couple of centuries ago, people in Germany started to take this idea very seriously.  They believed that their language was pure, original and natural, and that other European languages were corrupt and weak.  The modern German language was still being formed in the 18th century and German nationalism was growing rapidly, with dreams of unification.  As you can imagine, some people felt a very strong connection between the need for a shared language and the swelling tides of nationalism.  People started to believe that the very identity of a nation was reflected in its language.  German intellectuals believed that the power of the German mind and spirit was determined by its language.  This idea became known as linguistic determinism, which says that language determines what you can think.  (These days, experts are more likely to believe in a weaker view, called linguistic relativity, which says that your language only influences what you think.)

The belief in linguistic determinism was very racist.  For example, Johann Gottlieb Fichte (1762-1814) wrote: "the German speaks a language that has been alive ever since it first issued from the force of nature, whereas the other Teutonic races speak a language which has movement on the surface but which is dead at the root."  In other words, languages like English, French, Dutch, Flemish, and so on--these languages were all inferior to the pure, original German language.

Another German, Johann Gottfried Herder (1744-1803), was one of the first to promote the idea that a nation was defined by its language. He wrote the following lines of poetry in 1772, which are rather offensive to the French (and other non-Germans):

Look at other nationalities.  Do they wander about
So that nowhere in the whole world they are strangers
Except to themselves?
They regard foreign countries with proud disdain.
And you German alone, returning from abroad,
Wouldst greet your mother in French?
O spew it out, before your door
Spew out the ugly slime of the Seine.
Speak German, O You German!
While Herder wrote poetry, Fichte believed that simpler language was necessary to unite the German folk.  The Brothers Grimm agreed.  They believed their beloved book of fairy tales, published in 1812, was authentically German and could unite the nation with a common language and cultural heritage.  Around the same time, Wilhelm von Humboldt (1767-1835), a German philosopher, linguist, Minister of Education, diplomat and founder of the University of Berlin, also promoted the idea that a people is defined by its language.  He wrote:
Language is deeply entwined in the intellectual development of humanity itself . . . Language is . . . the external manifestation of the minds of peoples. Their language is their soul, and their soul is their language. . . . The creation of language is an innate necessity of humanity. It is not a mere external vehicle, designed to sustain social intercourse, but an indispensable factor for the development of human intellectual powers . . . .
In other words, language is not just a tool for communication; it is a fundamental property of humanity.  We would not be human--we would not have our advanced intellectual powers at all--if it were not for language.

On the one hand, the belief in linguistic determinism helped develop Germany into a remarkably strong nation which would come to lead the world in the arts and sciences.  However, the same belief fostered racism and helped pave the way to war and genocide in the 20th century.  Ideas like "linguistic purity" and "linguistic determinism" can be dangerous; however, that does not mean they are wrong.  They are powerful ideas and should be treated with caution.

Consider other ways language can alienate or oppress people.  When you learn a new area of knowledge, like a science or art, you learn a new language.  The more advanced the field, the more alien the language.  Expert languages can be alienating and can even be used to oppress people.

Even common language can be used to oppress people.  For example, poor people tend not to finish secondary school or go to university.  Their language skills are often noticeably weak.  They tend to speak in ways which are usually not accepted in professional or formal situations.  This can make it very difficult for them to move up in society and improve their economic situation.

Another interesting case is so-called "Black English," which I encourage you to read about.  Basically, the idea is that many black Americans have not been able to get a proper education because their unique language has not been respected, or even recognized, by schools.  Imagine being a child at a school that did not recognize that your language was significantly different.  You were told that your speech was simply wrong, even though it was how you were raised and how your family talked.  You were basically taught that your community was inferior.  What kind of psychological effects might that have on a child?

Can one language be inferior to another, or are all languages equal?  This is often a political question, as history has shown us.  To avoid war and oppression, should we just say that all languages are equal?  What if some languages really are better than others?  What if we can improve lives and our communities by improving our language?

Well, how do you improve a language?

One belief, which was popular in the early 20th century, was that a perfect language can be created: the language of logic.  It was believed that all the ambiguity and confusion that arises with natural languages could be avoided.  All we needed was a system of logical symbols and we would be set.

Another belief, which goes back at least to Galileo, is that mathematics is the ultimate language, the only pure language with which we can understand the world.  Many modern physicists agree:  Try to put physics into common language, they say, and you end up with nonsense.  You can only understand the world with mathematics.

On the other hand, there is the point of view of Nobel prize-winning Danish physicist Niels Bohr (1885-1962).  Bohr was one of the pioneers of Quantum Mechanics; yet, he famously said that anybody who claimed to understand it didn't really understand it at all!  One of the key ideas in Quantum Mechanics is complementarity.  Two properties are complementary if they cannot both be precisely known at the same time.  For example, position and momentum are complementary:  The more precisely you know an electron's position, the less precisely you can know its momentum; and the more precisely you know its momentum, the less precisely you can know its position.  Bohr once claimed that for every measurable quantity, there was another which was complementary to it.  He was then asked, "What quantity is complementary to truth?"  He replied, "clarity."  In other words, the more you have truth, the less you have clarity; and the more you have clarity, the less you have truth.

Ludwig Wittgenstein (1889-1951), an Austrian-British philosopher, made a related observation in the middle of the 20th century.  While some philosophers were trying to find perfect clarity through logical analysis, Wittgenstein realized that ordinary language is clear enough.  And when we try to "fix" it with logical analysis, we actually make it worse.  He wrote:
When I say: "My broom is in the corner",—is this really a statement about the broomstick and the brush? Well, it could at any rate be replaced by a statement giving the position of the stick and the position of the brush. And this statement is surely a further analysed form of the first one.—But why do I call it "further analysed"?—Well, if the broom is there, that surely means that the stick and brush must be there, and in a particular relation to one another; and this was as it were hidden in the sense of the first sentence, and is expressed in the analysed sentence. Then does someone who says that the broom is in the corner really mean: the broomstick is there, and so is the brush, and the broomstick is fixed in the brush?—If we were to ask anyone if he meant this he would probably say that he had not thought specially of the broomstick or specially of the brush at all. And that would be the right answer, for he meant to speak neither of the stick nor of the brush in particular. Suppose that, instead of saying "Bring me the broom", you said "Bring me the broomstick and the brush which is fitted on to it."!—Isn't the answer: "Do you want the broom? Why do you put it so oddly?"
What is clear to you might just depend on what you are expecting; it depends on your map.  Would a perfect language give us a perfect map?  What would the perfect language be like?

TOK: Sense Perception and Illusions

What follows is a collection of illusions I put together for my Theory of Knowledge class this year: 

Everybody's familiar with optical illusions, but there are other kinds as well.  We experimented with tactile illusions in class and I mentioned that there are also aural illusions.  Have you experienced any other kinds of illusions?  Illusions of taste or smell?

Here are some optical and auditory illusions to enjoy.  What do they reveal about the limits of sense perception?

First, an image.  When you look at it for the first time, you might not see a pattern at all.  It just looks like random black spots on a white background.  But eventually, all of a sudden, you can see a picture. Once you see it, you cannot unsee it.

Here's another one with the same effect:            

These examples raise the question:  How much of what we see depends on what we have learned to see?

The same phenomenon can occur with sounds. Here's an audio track with a stunning demonstration. You will hear a sentence which has been digitally altered to sound like gibberish.  You won't be able to figure out what the voice is saying.  Then you will hear the original sentence. When you then hear the digitally-altered recording again, it suddenly won't sound like gibberish anymore.  You will now be able to recognize the sentence.  Try it!

You may have experienced a similar effect--though not quite so dramatically, I'm sure--when you started learning a new language.  When we learn a language, we have to learn how to hear the speech patterns.  When you hear a new language for the first time, it's not just that you don't understand the words; you cannot even hear them as words at all.  It just sounds like gibberish.  Then, eventually, you can hear specific words, even if you don't know what they mean yet.  You have learned how to perceive the speech patterns.

Do we also learn how to perceive smells or tastes?  Is it possible that we learn how to perceive by touch as well?  Perhaps all sense perception relies on prior knowledge.  We perceive because we know how to perceive.

Is this a kind of map knowledge?  Remember, map knowledge is all about expectations.  If we know how to perceive, that could mean that we have expectations which guide the way we perceive.  This raises the question:  How much does sense perception rely on our expectations?

Here's a visual test.  Are you aware of what you see?  (This one's better big, so expand to full screen if you can.)

Sometimes our expectations for one sensory organ can be altered by another, and this affects what we perceive.  For example, as you probably know already, what you smell affects what you taste.  Did you also know that what you see can affect what you hear?  Welcome to the McGurk Effect!

Sometimes a 'b' sounds like a 'b', and sometimes like an 'f', depending on what you see.  But here's a tricky question:  Do you actually hear the 'f', or do you just think you do?

In other words, should we say that your perception has changed, or should we say that you are wrong about what you are perceiving?  What's the difference?

Now here's another audio illusion--an illusion of music.  This link will take you to a new page where you will hear a woman talk in a normal speaking voice.  She says, "The sounds as they appear to you are not only different from those that are really present, but they sometimes behave so strangely as to seem quite impossible."  Then the part where she says "sometimes behave so strangely" repeats over and over again, and eventually it starts to sound like music.  You will hear melody in her words and you will hear her singing, even though it is just a repetition of the same, spoken recording.  Her normal speech becomes (or seems to become) music through mere repetition!

When the recording is over, start the recording again from the beginning.  It will sound like she is talking in a normal voice again, but when she gets to "sometimes behave so strangely," it will sound like she suddenly begins to sing!

The same recorded words can sometimes sound like speech and sometimes like song.  In this case, the change is not because of what you see, but because of what you have heard in the past.  Our past experiences can change the way we perceive the world.  Again, however, we have a difficult question:  Do you actually hear music in this case, or do you just think you do?  What is the difference?

Finally, another auditory illusion.  Listen to this short musical recording, and then play it again, and again.  It's like it changes every time, getting higher and higher in pitch.

Do we actually hear the recording at a higher pitch each time we play it, or do we just think we do?

Sunday, October 26, 2014

A Deductive Model of Inductive Reasoning

There is a widespread belief that inductive reasoning has the following two characteristics:

  1. It entails making a general or predictive claim based on past observations.
  2. The conclusion does not follow as a matter of logical necessity from the premises.

It is also commonly supposed that the second of these characteristics follows necessarily from the first.  It is believed that any time we make general and/or predictive claims based on past experiences, we are drawing a conclusion which does not follow as a matter of necessity from our premises.

I think this is incorrect.  I will take the first characteristic as a defining feature of inductive reasoning and show that the second characteristic does not obtain.  In other words, I will provide a deductive model of inductive reasoning such that (1) a general or predictive claim is based on past observations, and (~2) the conclusion follows as a matter of logical necessity from the premises.

I will take as a starting point an example from Hume:

Premise:  The sun has risen in the east every morning until now.
Conclusion: The sun will rise in the east tomorrow.

As stated, this argument is incomplete--or, rather, the full set of premises is hidden.  The hidden premises can be explicitly formulated in deductive form as follows:

Premise 1:  The sun has risen in the east every morning until now.
Premise 2:  Some X causes the sun to rise in the east in the morning.
Premise 3: Unless some Y prevents X from causing the sun to rise in the east in the morning, the sun will continue to rise in the east in the morning.
Premise 4: If the sun continues to rise in the east in the morning, the sun will rise in the east tomorrow.
Premise 5: There is no Y preventing X from causing the sun to rise in the east in the morning.
CONCLUSION: The sun will rise in the east tomorrow.

This is a deductively valid argument.  Furthermore, I believe it adequately represents how inductive reasoning (of the sort indicated in Hume's example) actually occurs.

Of course, we can question the truth of any or all of the premises.  That, however, is not the point.  The point is that (1) and (~2) obtain.

The model can be generalized as follows:

P1: A has been observed to occur in condition B.
P2: Some X causes A to occur in condition B.
P3: Unless some Y prevents X from causing A to occur in B, A will continue to occur in B.
P4: There is no Y preventing X from causing A to occur in B.
CONCLUSION: A will occur in B.
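One way to see that this generalized schema is deductively valid is to strip it down to its propositional skeleton.  (The letters below are abbreviations of my own, introduced only for this sketch.)

```latex
% Abbreviations (introduced for this sketch only):
%   p : "some Y prevents X from causing A to occur in B"
%   a : "A will occur in B"
% P3 is then the conditional (not-p -> a), and P4 is not-p.
\[
\neg p \rightarrow a, \qquad \neg p \quad \vdash \quad a
\]
% The conclusion follows by modus ponens.  P1 and P2 do no
% deductive work at this propositional level; their role is to
% motivate our acceptance of the conditional premise P3.
```

Seen this way, the inference itself is simply an instance of modus ponens; whatever doubts we may have concern the premises, not the form of the argument.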

It may be observed that P2 and P3 imply determinism.  P2 says that the repeat occurrence of A in B is the result of a cause--it is determined by X.  P3 states that the same effects will follow from the same causes in the same conditions unless a new condition is introduced which negates the cause.  (This new condition may simply be the absence of the cause, or it may be a counter-cause.)  These premises may be questioned.  We may ask what justifies our acceptance of them.  However, such openness to questioning does not negate the formal validity of the argument.  To say that an argument is formally valid is only to say that it is coherent and that the conclusion follows necessarily from the premises.  Inductive reasoning is, on my account, formally valid.  It is a case of deductive reasoning.  So we can accept (1) whilst rejecting (2).

The two weaknesses of inductive reasoning seem to be these:  First, we cannot be sure that the same effects will always follow the same causes in the same conditions; second, we cannot be sure that the same causes are working in the same conditions--i.e., we can never be sure that P4 holds.  We might simply be ignorant of some future Y which will prevent X from causing A to occur in B.  Again, however, these weaknesses do not affect the formal validity of the reasoning.  They only affect the soundness of particular instances of inductive reasoning.  Furthermore, one can be justified in believing that which is not certain; so the fact that we cannot be sure does not mean we cannot be justified in accepting the premises.

Update:  In response to a helpful commenter, I have more directly addressed the well-known Humean "problem of induction."  I wrote in the comments section below: "Empiricists like Hume hold that [some of the premises my deductive schema relies on] cannot be justified except inductively, which makes inductive reasoning circular. However, if we take a Quinean perspective and reject Humean empiricism--if we say that all premises, even simple observational statements, are theory-laden--then there is no simple distinction to be drawn between observational statements and the premises required for induction. So the epistemological problem is no longer a problem of induction per se; it is rather a problem of how we justify premises in general. At least, that is the sort of direction I'm leaning in."

Thursday, September 25, 2014

Sexism, Gender and Neuroscience

A recent Guardian article by Robin McKie attempts to undermine an allegedly sexist tendency plaguing the field of neuroscience.  I have no idea how widely or deeply sexism runs through neuroscientific research, but I'm willing to assume it's pretty wide and pretty deep.  That's not a jab at neuroscience per se.  There's compelling evidence that sexism is virtually everywhere, so why not neuroscience?

The alleged manifestation of sexism in McKie's sights is the idea that there is such a thing as a genetically determined male or female brain.  McKie's thesis is this:  Men and women think and act differently because they have been raised to think and act differently, not because they are genetically predisposed to do so.  She accuses the field of neuroscience of engaging in a cover-up:  Intentionally or not, neuroscientists have been misleading us about the real causes of gender difference.  The culprit, she argues, is cultural bias, not biological determinism.

Most biologists will tell you that nature and nurture go hand in hand.  All behaviours and cognitive capacities should be considered the result of both environmental factors and genetic predispositions. You would be very hard-pressed to find a neuroscientist who denied that cultural factors played any role at all in neurological development.  The question is, how much of a role does culture play?  Are there some behaviours or capacities which women or men are genetically more likely to express?

McKie's answer is clear:  There are no cognitive capacities or behaviors which men or women are genetically more likely to express.  (Obviously she's not including things like ovulation or breast-feeding.  She's talking about behaviors which are not forced or limited by our reproductive organs.)

McKie's thesis is politically attractive:  It can seem like a useful weapon to wage in the war against sexism.  However, it is not supported by the science and it runs counter to common sense.  Common sense tells us that men and women have all sorts of genetically determined differences, so why shouldn't we think there are important neurological differences as well?  It would be more than a little surprising if it turned out that of all the genetically gendered parts of our bodies, our brains were not one of them.

McKie makes two mistakes which are worth highlighting.  The first is that she falls for the line-drawing fallacy. She approvingly quotes Professor Dorothy Bishop, of Oxford University:

"They talk as if there is a typical male and a typical female brain – they even provide a diagram – but they ignore the fact that there is a great deal of variation within the sexes in terms of brain structure. You simply cannot say there is a male brain and a female brain."
Consider an analogy:  There is no such thing as a typical hurricane or a typical tropical storm.  There is a great deal of variation within hurricanes and tropical storms, in terms of wind speed, precipitation and threats to various ecosystems.  Therefore, you simply cannot say there is a hurricane and a tropical storm.

The fact that there is variation within groups, or even that there is not always a clear, dividing line between them, does not mean that there is no sense in recognizing the groups at all.  Vagueness is not always hopeless.  While Professor Bishop is surely right about the variety within male and female brains, we should not jump to the conclusion that these categories are useless--especially since there is scientific research which indicates significant differences between them.

Perhaps McKie has a point, however:  Perhaps neuroscientists are jumping to conclusions about male and female brains.  What scientific evidence is McKie discussing?  This takes us to her second error.  She is discussing a new study by a team led by Professor Ragini Verma:
Verma's results showed that the neuronal connectivity differences between the sexes increased with the age of her subjects. Such a finding is entirely consistent with the idea that cultural factors are driving changes in the brain's wiring. The longer we live, the more our intellectual biases are exaggerated and intensified by our culture, with cumulative effects on our neurons. In other words, the intellectual differences we observe between the sexes are not the result of different genetic birthrights but are a consequence of what we expect a boy or a girl to be.
There are three points to consider here.  First, the fact that a finding is consistent with McKie's view does not mean it entails her view.  There may be an entirely genetic reason why neuronal connectivity differences between the sexes increase during development.  In fact, developmental differences between the sexes are very often the result of delayed gene expression.  Just as puberty is genetically determined to occur at a particular time in life, so too may neuronal differences be genetically determined to kick in at various stages of development.

Second, McKie has given a somewhat misleading presentation of the results of Verma's study.  Verma's conclusion is about key stages in neurological development, not about how the brain changes "the longer we live."  Unsurprisingly, most of the differences that Verma's team have observed appear during puberty. Verma's conclusion is that this research suggests a biologically optimized gender difference in neuroanatomical development.  That appears to be a valid conclusion.

The third point to consider is that McKie is promoting a false dilemma.  Even if we say that cultural factors lead to a greater divergence between male and female brains, there may be a genetic explanation for that.  It's not necessarily EITHER genes OR culture.  It could be genes AND culture.  It may be that men and women are genetically disposed to have brains which are different, and these differences are statistically likely to produce cultural institutions which reinforce those differences, leading to an even greater divergence between male and female neuroanatomy.  Alternately, it may be that genes are not ultimately responsible for many of the observed differences between male and female brains, but they still may be responsible for some, or even most, of them.

McKie ends by giving us a scientific basis for questioning Professor Verma's findings.  That is in the spirit of good science, but what conclusions should we draw?

The lesson I take from this situation is not that men will be men and women will be women.  Rather, it is that we don't yet know what men will be, or why, or what women will be, or why.  Not knowing is liberating.  It means nobody can tell women what they will be, and nobody can tell men what they will be.  We can enjoy not knowing, and we can stop pretending like we have more answers than we really do.

Update:  It's also worth pointing out that we should look out for the naturalistic fallacy here.  The fact that we are genetically determined to be a certain way doesn't mean we should be that way.  We can (and often do) resist genetically determined differences if we don't like them.

Sunday, July 27, 2014

Leiter on Rape and Sexual Assault in Illinois Law

Professor Peter Ludlow's anti-defamation case was dismissed recently, and Professor Brian Leiter is upset about it.  He says the case should have been a "slam dunk," and argues that the dismissal is most likely the result of bias on the part of the judge.  However, Professor Leiter's argument is not sound.  He hasn't even gotten the facts straight.  In response to an argument by Professor Heidi Lockwood, he says:

"Illinois defines rape as "criminal sexual assault" involving "sexual penetration"  . . . There was never an allegation of sexual penetration against Ludlow by the undergraduate student, so there was never an allegation of criminal sexual assault, i.e., rape."
If you follow the link, you will see that Illinois does not define "rape" as "criminal sexual assault."  The word "rape" does not appear anywhere on that page.  In fact, Illinois law does not define "rape" at all.

Professor Leiter says there is a basis for claiming Professor Ludlow was accused of sexual assault, but not rape. His reasoning is that "sexual assault" is not the same as "criminal sexual assault."  He defines "rape" as "criminal sexual assault" and he says this is not the same as "sexual assault."  However, Illinois law does not distinguish between any of these categories, so Leiter's claim is questionable.  He appeals to the fact that the defendants changed the wording in their news articles as support for his contention that there is a significant difference in meaning between "rape" and "sexual assault."  However, it is reasonable to think that the articles were changed upon request in order to avoid legal problems.  The changes are not an admission of wrong-doing and cannot be used to prove any difference between "sexual assault" and "rape."

Judge Flanagan observes that the terms "rape" and "sexual assault" are sometimes used synonymously in common language, and so the word "rape" helped give a reasonably accurate summary of the charges against Professor Ludlow. As she says, Merriam-Webster, which happens to be the dictionary Professor Ludlow brought to the table, lists these terms as synonyms.  Furthermore, I've also discovered that Illinois State University claims "rape" and "sexual assault" are synonymous.   Professor Leiter disagrees, but he seems to be relying exclusively on his intuitions.

Professor Leiter does have a point:  The term "rape" can have a stricter definition which can, in some people's minds, suggest a harsher crime.   (Update:  In her reply to Professor Leiter, Professor Lockwood claims that the stricter definition shouldn't have a harsher connotation.)  However, I cannot see how this is an easy or straightforward point to adjudicate. It does not seem like a slam dunk case.  Perhaps Judge Flanagan was erring on the side of caution in the interests of protecting the freedom of the press.  If a person who is accused of X can successfully sue a media outlet because they think the wrong word was used to describe the accusation, even though the law does not distinguish between the terms and even though there is substantial evidence that people and institutions use the words interchangeably, then the power of the press will be severely limited.

Perhaps Judge Flanagan made some mistake in her reasoning, but that is not obvious to me.  What is obvious, I think, is that the accusation of bias is unfounded.

Update:  A couple points to add.  First, The Women's Center's page on Illinois law makes it hard to distinguish between "rape" and "sexual assault" at all.  Second, after looking at the Illinois criminal code of 2012, I have to wonder if Professor Ludlow might have been more properly accused of sexual abuse, and not sexual assault.

Second Update:  Professor Lockwood has replied to Professor Leiter, offering some socio-historical explanation for why we cannot and should not presume there is a relevant distinction between "sexual assault" and "rape."  Unfortunately, Professor Leiter's latest reply to Professor Lockwood (in an update to his original post) seems to have missed the point.  Professor Lockwood also makes the same point I do about the newspaper's correction not being an admission of wrongdoing, but Professor Leiter did not respond to that point.

Friday, July 11, 2014

Deepak Chopra's Challenge

Deepak Chopra, a well-known spiritualist, does not like physicalism. Physicalism says that everything is physical and can be entirely explained in non-mental terms.  So, according to physicalism, consciousness can ultimately be explained in physical, non-mental terms.  Chopra says this view is an unreasonable dogma, and that it makes much more sense to believe that thoughts are non-physical.  When we think, he says, non-physical phenomena affect the workings of the brain.  To add muscle to his position, he has issued a challenge:  According to the New Statesman, he will award a prize of $1 million to anyone who can demonstrate, in a peer-reviewed scientific journal, that human consciousness is created by the brain.   This is not a challenge to demonstrate that Chopra has misunderstood physics, nor is it an invitation to critique his own arguments about consciousness.  The challenge is to scientifically demonstrate that brains create consciousness.  That's it.  If we cannot meet that challenge, he says, then physicalism is unreasonable.  The supposition is that, if his challenge cannot be met, then his spiritualist view is more reasonable than physicalism.  In fact, Chopra thinks it is more reasonable from a scientific point of view.  He makes two arguments to this effect, but neither of them works.

First, he makes an argument appealing to neuroscience.  He says that we have no evidence of brains creating thought, but we do have evidence of thoughts affecting our brains:  "our thoughts are creating molecules all the time - the chemical makeup of the brain is altered with every thought, feeling, and sensation. That is indisputable."  It is a given that thinking alters brain chemistry.  But that does not mean that thought creates molecules.  In any case, despite Chopra's assertion to the contrary, there is evidence of neurological events affecting how we think:  Just notice the effects of drinking a few glasses of wine, or the power of anesthesia.  If anything is indisputable, it is that chemical processes affect our thoughts and can dramatically alter our ability to think clearly, or even at all.  It would be great if Chopra had evidence of non-physical stuff affecting the workings of the brain, but he doesn't.  So why not think that thinking just is electrochemical reactions in the brain?  The physicalist position seems like the simplest explanation and Chopra hasn't given any reason to think it comes up short.

Though the neurological evidence is not on Chopra's side, he also claims to have evidence from physics. His second argument for spiritualism relies on what that New Statesman article describes as "the observer effect":  "Quantum physics's observer effect - whereby observing an event at the quantum level changes the outcome of the event - is taken by Chopra to be proof that he's right about consciousness . . ."  Scientists have questioned his understanding of the physics (e.g., see the article in the link above), but we need not go that far.  Even if we suppose he has correctly understood the physics, his argument does not work.  It is inconsistent.

Chopra claims that non-physical consciousness is responsible for the observer effect.  He denies that the brain, or any other complex physical system, is responsible.  For, if some complex physical system were responsible, then the effect would not be proof of non-physical consciousness.  Chopra would need some additional evidence that it was non-physical consciousness, and not any complex physical system, which was responsible for the observer effect.  Since Chopra has no evidence, he must be assuming that complex physical systems are not necessary for the observer effect.  Thus, Chopra's argument requires the following claim:

          (1) Complex physical systems are not necessary for the observer effect.

The observer effect is the result of the role of the observer in an experimental setting.  Thus,

          (2) Experimental observation is necessary for the observer effect.

Complex physical systems are necessary to make observations in experimental settings.  In addition to the physical properties of the experimental setting, there are also the physical properties of the observer.  So (3) must also be true:

          (3) Complex physical systems are necessary for experimental observation.

From (2) and (3), we can conclude:

          (4) Complex physical systems are necessary for the observer effect.

This contradicts (1).  If we accept that the observer effect is a real phenomenon that occurs when observations are made in experimental settings, then (1) is false.  Chopra must acknowledge that complex physical systems are necessary for the observer effect.  And in that case, he has no reason to think that the observer effect is the result of non-physical causes.  Thus, the argument does not work.
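The objection can be compressed into a short derivation.  (This is my own sketch: the shorthand N(x, y), and the letters P, O and E, are labels I am introducing, not anything Chopra or the New Statesman uses.)

```latex
% Shorthand (mine): N(x, y) = "x is necessary for y";
% P = complex physical systems, O = experimental observation,
% E = the observer effect.
\begin{align*}
(1)\quad & \neg N(P, E) && \text{premise Chopra's argument requires}\\
(2)\quad & N(O, E)      && \text{the effect arises only under observation}\\
(3)\quad & N(P, O)      && \text{observation requires physical observers and apparatus}\\
(4)\quad & N(P, E)      && \text{from (2) and (3), by transitivity of necessity}
\end{align*}
```

Since (1) and (4) are jointly contradictory, and (2) and (3) are hard to deny, it is (1) that has to go.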

The bottom line is, without evidence of non-physical causes, Chopra's view is not more scientifically reasonable than physicalism.  It is quite possible that physicalism is the simplest option on the table.  This does not prove that physicalism is true.  It does not show how brains or any other complex physical systems might create consciousness, either.  However, it does show that Chopra does not have an argumentative leg to stand on.

Thursday, July 10, 2014

Calling Stupid On Winter Soldier

I've been reading reviews of Winter Soldier this morning (I watched it last night for the first time) and I've only found two that straight out call it dumb.  Here's one.  Here's another.  I agree.  It is dumb.  Prometheus levels of dumb.  But it has a complicated plot and it touches on issues that are important to people (drone attacks, government surveillance, freedom, patriotism, friendship and even romance), so I can understand why less critical audiences might think it makes more sense than it actually does.

Not that it has to make sense.  You can enjoy it just for the spectacle.  I didn't.  I thought the first half-hour was mediocre and the rest went from bad to worse.  There wasn't much in terms of memorable action sequences and the characters are dull.  The only interesting character in the film is Sam Wilson/Falcon, but even his character ends up disappointingly.  That's my opinion, anyway, but who cares?  I don't want to judge the movie as a whole and I don't want to discuss personal tastes.  I want to discuss why I think the movie does not have an intelligent plot.  I also think it is very sexist and a bit racist, too, which I'll discuss as well.  (Oh, and see the update, where I discuss the horrifying hypocrisy in the movie's political message.)

First, the plot.  I recommend reading both of the reviews I linked to earlier.  In addition to the points they make, here's what bothered me most about the plot.

1. Project Insight involves three large "helicarriers" (floating drone warships, basically) which are coordinated and can take out something like a million targets at once.  These three helicarriers are supposed to be able to neutralize all threats all over the world.  Somehow.  Seriously, how?  Their inauguration is supposed to be a mass slaughter of close to a million people.  What is supposed to happen next?  The world will sit back and watch as those three weapons of mass destruction float from city to city, killing millions of people?  This could not possibly turn out well for HYDRA, or even come close to realizing Pierce's ultimate plans.  Project Insight is a stupid, stupid project.

2. Black Widow lifts intelligence from a ship and stores it on a MacGuffin--that is, an over-sized yet futuristic-looking pen drive.  Nick Fury cannot access the information on the drive, which raises the question:  How did Black Widow access it on the ship?

3.  Fury cannot access the information for mysterious reasons, so he tells Pierce (Robert Redford) that he wants to hold off on launching Project Insight.  Fury thinks there is something up at SHIELD and that Project Insight might be compromised.  So what does he do?  He tells the head of that project that he is concerned about it.  Remember, just a few minutes before, we got a long speech about how Fury doesn't trust anyone.  Yet, he trusts Pierce enough to let him know he's suspicious about Insight.

4. Why try to kill Fury when he is in his super car?  Why not have a sniper shoot him before he gets in the car?

5.  Fury manages to escape from the Winter Soldier because he has a nifty device that can burn holes in the ground.  Where do those holes lead?  Why doesn't the Winter Soldier follow Fury into the hole?  This isn't the last time people escape certain doom by drilling a hole in the ground.  These holes seem nothing like holes and everything like magical transportation devices.

6.  Fury goes to Captain America's apartment, knows it is bugged, but still talks.  Openly.  About the fact that he needs to stay there.  Sure, he doesn't mention the assassination attempt (though you would think every superhero would have heard about it already, since it devastated a portion of the city in broad daylight.)  But he knowingly announces to SHIELD, which he does not trust, that he is at Captain America's apartment.  Then he stands up.  Of course he's going to get shot!

7.  Let's get back to that pen drive.  Captain America hides it in a vending machine, where it is plainly visible--and where it will become even more visible if people buy one or two particular candy bars.

8.  Somehow, Black Widow (a hacker? at an Apple store???) manages to determine where the data on the drive was created. It leads them to Zola, who is "living" underneath an abandoned bunker in an abandoned military facility.  Nobody thought it might be important to, I don't know, guard Zola?  Why did HYDRA leave him for dead?

9.  Why was Zola still on that tape?  Okay, let's say a room full of tape machines could store Zola's mind. Unlikely, but okay.  All the information on that tape could have been transferred to a hand-held digital computer.  The tape could have been destroyed, or put into storage, and Zola would have been much more hi-tech.  I guess HYDRA didn't trust Zola, or didn't want him involved with HYDRA anymore.  In that case, why keep him connected to HYDRA's network?  And we know Zola is tapped into HYDRA's network, because he knows the ballistic missile is coming to kill them all.

10. Wait, what?  HYDRA destroys Zola in order to kill Captain America and Black Widow?  Like that was their best option?  And Zola actually tells them the missile is coming, giving them time to prepare and, ultimately, survive?

11.  Why does the Winter Soldier speak Russian (and with a Russian accent) up until he takes his mask off, at which point (and for the remainder of the film) he speaks English with an American accent?

12.  At the beginning of the film, Captain America has a fist fight with Batroc the Leaper, who seems impossibly strong and extremely difficult to kill, or even seriously injure.  Batroc is not supposed to have superhuman strength or powers, so how is this supposed to make sense?

13.  Pierce tries to kill Captain America because he thinks Captain America is lying about Fury. That's just a suspicion, though, and not a good reason to try to assassinate one of his strongest assets.

14.  There is surely a better way to kill Captain America than by cramming a bunch of thugs in a glass elevator with him.  Again, sniper, anyone?  Or poison?  Or just wait until Project Insight is up and running and use the helicarriers!  The elevator idea was almost as stupid as Project Insight itself.

15.  When the good superheroes finally get control of the three helicarriers, why do they have them shoot each other out of the sky?  How many lives did they endanger?  Three helicarriers exploding directly above a city center?  Seriously?

16.  Back to the pen drive again.  So we eventually find out that Fury had hired Batroc to hijack that ship.  And then had Black Widow, Captain America and Rumlow stop them, giving Black Widow the chance to access the data on the computer.  Why was the ship's computer so important?  More importantly, why not have one of the hijackers get the data for him?  Fury puts lives in danger by hijacking the ship, and then puts more lives in danger to have people fight the people he hired to hijack the ship just to get data which the hijackers could have easily gotten?  Ugh.  (Maybe Fury didn't really have the ship hijacked?  In that case, who did and why?  Remember, a high-ranking HYDRA agent was among the captives, and at least one high-ranking HYDRA agent was among the rescuers.)

17.  Here is a list of more plot holes.  Not all of them are legitimate plot holes, but several are good.  In particular, check out 3, 4, 8, 14-18, 20, 24, and 28/29.

The only female superhero in the Marvel Cinematic Universe is Black Widow, and she is only an almost-superhero.  She doesn't have any superpowers to speak of. She's just a highly trained soldier and spy.  (In The Avengers she had an almost-super ability to manipulate people into giving up secret information, but that almost-super ability is nowhere present in Winter Soldier.)  The bigger problem is that Black Widow can't even have a leading role in a superhero movie without playing second fiddle to a male lead.  And, of course, there has to be a strong romantic element, too, or else audiences might get uncomfortable.  Yes, there is an unspoken attraction between her and Captain America.  There is romantic tension and even emotional intimacy between them.  She kisses him more than once:  the first time, he slyly admits to being aroused; the second time, she is clearly emotionally vulnerable.  Also, she cares more about hooking him up with her friend than almost anything else.  She repeatedly talks about it in the middle of a highly dangerous mission at the beginning of the film and then brings it up again at the end of the film.

Black Widow is the only even partially-developed female character in the film.  Of the other four female characters, Agent Hill is the only one necessary to the plot, and she has no personality to speak of at all.  The other two named female characters (Peggy and Kate) are only there because they are past or potential love interests for Captain America.  The fifth and nameless female character is one of the heads of SHIELD, and she disappears during the final act of the film without any explanation.

Finally, the film does not pass the Bechdel test.  It has only one scene where two women talk to each other:  Hill and Black Widow talk for all of five seconds as they watch doctors operate on Nick Fury.  However, they are only talking because they are concerned with Fury's fate, and they are only talking about the investigation into who shot him.  So, while the film does clearly pass the first part of the test (there are at least two named women in the film), it only barely, by a tiny thread, passes the second part of the test (two women in the film do talk to each other, but only two women, and they do so for all of five seconds).  Winter Soldier does not pass the third part of the test, since the two women are talking about what happened to a man.

Nick Fury still hasn't been given a chance to show he is as much of a badass as he appears to be.  So far, he is all talk and no action. In all nine of the recent Marvel movies, why can't Nick Fury play a substantial role in the action and have a well-developed character?  His character is barely developed in this film.  He is more of a plot device than anything else.  The only character development Fury gets here is that he tells a story about his grandfather the elevator operator.  Fury has got to be over 40, which means his character was born before 1974.  His grandfather would have grown up well before the Civil Rights Act of 1964 and would have experienced Jim Crow first hand.  That might seem like an irrelevant historical detail, but consider it in context.  The writers could have given Fury a much more racially sensitive story about his grandfather.  Instead, he tells of his grandfather the badass who carried a loaded gun with him on his way home from work every day.  This gives heritage to Nick Fury's own badass persona, but it is stereotypical gangsta.  (See my Update 2 below for more on the film's gangsta narrative.)  This is arguably racist, and certainly racially insensitive.

Then there is Falcon, who describes himself as a slower version of Captain America.  He says, and I quote, "I do what he does, only slower."  As it turns out, Falcon does not think for himself.  The only decision he ever makes in the movie is to be loyal to Captain America.  This is a man who left the military and found a meaningful life helping others overcome personal tragedy and loss.  He drops all that, he says, because he can help Captain America.  I guess no deeper motivation than that is necessary.  There's also the fact that, like Black Widow and Nick Fury, Falcon is an almost-superhero.  He is just a highly trained soldier who knows how to use some top-secret gear.  Presumably other soldiers were trained, and could easily be trained, to do the same thing.  Oh, and it turns out Falcon is not going to appear in the next Avengers movie.

Looking at the non-white and non-male characters in this film, I would not say the film does a good job of diversification.  The Marvel Cinematic Universe is still extremely white and extremely male. Fortunately, the next Avengers movie will have a stronger female presence and Guardians of the Galaxy looks significantly more diverse in all respects.  As for more intelligent plots and characters, however, I'm not holding my breath.

Update:  I found another review which calls stupid on the movie.  This one has some interesting observations about the movie's racism and also gets into the hypocrisy of the movie's political message.  I was originally going to write at length about that, too, but my thoughts got too complicated and long-winded.  My basic problem is this:

The movie ends with the "good" superheroes taking orders from Captain America ("Captain's orders!") and then claiming immunity from the criminal justice system.  What is the justification?  Fear.  The American people need Captain America to be above the law, like a fascist dictator, making rash decisions, killing people, endangering countless other lives, stealing military secrets and weapons, and undermining governmental agencies, because:  fear.  That is the line that Black Widow sells.  I'm not saying Captain America or any of them deserve to go to prison, but that is something the criminal justice system is there to determine.  The superheroes are using the same logic that Alexander Pierce used, and which they killed him for.

Update2:  I've transcribed the elevator scene where Fury gives his grandfather monologue:

Captain America (C) and Fury (F) are in the elevator and F has just given C access to Project Insight.  They're riding down towards the helicarriers.

C: You know, they used to play music.

F: Yeah.  My grandfather operated one of these things for 40 years.  My granddad worked in a nice building, got good tips. He walked home every night, a roll of ones stuffed in his lunch bag.  He’d say hi.  People would say hi back.  Time went on.  Neighborhood got rougher. He’d say hi, they’d say, “keep on steppin.”  Granddad got to grippin’ that lunchbag a little tighter.

C:  Did he ever get mugged?

F:  [laughs] Every week some punk would say, “What’s in the bag?!”

C:  What’d he do?

F: He’d show’em! A bunch of crumpled ones, and a loaded .22 Magnum.  [pause] Granddad loved people.  But he didn’t trust’em very much.

They see the helicarriers.

F: Yeah, I know.  They’re a little bit bigger than a .22.


I'd forgotten that the helicarriers were directly compared to the guns in the grandfather narrative.  The idea is that, just as a world-weary black man in the ghetto needs guns to keep his hard-earned money, SHIELD needs helicarriers to protect the good, honest people of the world from evil.  Nick Fury isn't motivated by a desire to fight the racism that his grandfather faced at the hands of white people.  He doesn't even acknowledge that.  Instead, he's motivated by a desire to keep what's his from the violent hands of poor, black hoodlums.  He's a gangsta out to fight gangstas.

Saturday, June 21, 2014

Congratulations, Ryan Born

I've been discussing things to do with Sam Harris' Moral Landscape Challenge lately, but I forgot to congratulate the winner, Ryan Born.  Though it was Harris' challenge, it was Russell Blackford who chose the winning essay.  I've only read a handful of the entries, but I don't recall reading any that were better than Ryan's.  It's a well-written and interesting essay, and I trust Russell to have found the best of the lot.

That said, I'm not thrilled with Ryan's essay.  I like the general strategy of taking up the Value Problem.  However, Ryan's primary tactic is problematic.  He takes up the idea of self-justification in a confused, or at least confusing, way.  Sam Harris has said that he would change his mind if he could be convinced that "other branches of science are self-justifying in a way that a science of morality could never be."  Ryan mistakenly takes Harris to be saying that science is self-justifying.  This allows Harris to reply:  "Contrary to what Ryan suggests, I don’t believe that the epistemic values of science are “self-justifying”—we just can’t get completely free of them."

That isn't the end of Ryan's argument, or Harris' reply, of course.  Ryan does at least hint at difficulties with Harris' approach, and Harris' lengthy reply opens the door to even more objections (some of which I'll get to momentarily).  Unfortunately, Harris has not indicated that he will engage Ryan any further, and we can no longer expect any sort of evaluation from Russell.  (Originally, Russell was going to evaluate Harris' response to the winning essay.  I guess he still might, but probably not on Harris' blog.)  So I'm a little disappointed.  I would have liked to see a winning essay that cut right to the heart of the matter without any confusion.  Not that it necessarily would have mattered.

I think the best strategy against Harris is to point out the absurdity of his interpretation of "should" and "ought." In his response to Ryan, he says, "Some intuitions are truly basic to our thinking. I claim that the conviction that the worst possible misery for everyone is bad and should be avoided is among them."  So we have the following, conceptually basic intuition:

(1) The worst possible misery for everyone is bad and should be avoided.

In another part of his response to Ryan, Harris says strange things about the word "should":

Ethics is prescriptive only because we tend to talk about it that way—and I believe this emphasis comes, in large part, from the stultifying influence of Abrahamic religion. We could just as well think about ethics descriptively. Certain experiences, relationships, social institutions, and technological developments are possible—and there are more or less direct ways to arrive at them. Again, we have a navigation problem. To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. “You shouldn’t lie” (prescriptive) is synonymous with “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (descriptive). “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive). In my view, moralizing notions like “should” and “ought” are just ways of indicating that certain experiences and states of being are better than others.

If that is correct, and prescriptive "should" statements are synonymous with descriptive statements, then we can restate (1) as follows:

(1*) The worst possible misery for everyone is less conducive to human flourishing and avoiding it is more conducive to human flourishing.

We must remember that Harris is very flexible about what counts as misery and flourishing.  In fact, he defines "flourishing" and "misery" in opposing terms.  Once you realize that, it is clear that his "intuition" is a tautology.  If we accept Harris' view of morality, all (1) means is:  that which is least conducive to human flourishing is less conducive to human flourishing, and avoiding that which is least conducive to human flourishing is more conducive to human flourishing.  According to Harris, that is an intuition that is basic to our thinking.  And somehow it is supposed to sustain moral realism.
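To make the substitution explicit, here is a rough schematic of the argument.  The shorthand is mine, not Harris': write $W$ for "conducive to human flourishing" and $m$ for the worst possible misery for everyone.

```latex
% Read, per Harris' proposed synonymy:
%   ``$x$ is bad''            as  ``$x$ minimizes $W$''
%   ``$x$ should be avoided'' as  ``avoiding $x$ is more conducive to $W$''
\begin{align*}
\text{(1)}\quad  & m \text{ is bad, and } m \text{ should be avoided.} \\
\text{(1*)}\quad & m \text{ minimizes } W \text{, and avoiding } m
                   \text{ is more conducive to } W.
\end{align*}
% But $m$ just is, by definition, that which minimizes $W$.  So the first
% conjunct reduces to ``that which minimizes $W$ minimizes $W$'',
% i.e., a tautology.
```

On this reading, (1) cannot do the work of a substantive moral intuition, because it is true by definition alone.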

The absurdity of Harris' language games is evident.  It would have been nice if Ryan Born had pointed that out, but perhaps Harris will still see fit to respond to the challenge.

Thursday, June 19, 2014

Reflection on my recent encounter at Jerry Coyne's blog

There's one more facet of my recent encounter on Jerry Coyne's blog that I haven't commented on.  My first comment on the thread was a criticism of Coyne.  He was impressed by the number of people who responded to Sam Harris' Moral Landscape Challenge, saying that it shows just how many people take Sam Harris' views seriously.  Here's my criticism:

"I don’t take Sam’s views seriously, and I wouldn’t assume that most, let alone all, of the respondents did so because they take his views seriously. What they presumably take seriously is the opportunity to get published on his blog, earn $2,000 and possibly, just possibly, change his mind. What I take seriously is the fact that so many people take him Sam Harris seriously. I wrote my essay because I think his views are not worth taking seriously, and I think there is a serious problem with the way so many people follow him.
By the way, my essay was not entered into the competition, because I didn’t learn of the competition until after the deadline. But I wrote one anyway."

Here's how one commenter, GBJames, responded to that:

"Wait… Other people were only motivated by the hope of winning $2000 but you wrote a response without any possibility of a cash reward?

Sounds to me that you take his views seriously even if you don’t like them."

Note the mischaracterization:  I did not say that anybody was "only motivated by the hope of winning" money.  Also note the fact that GBJames jumped to a personal conclusion about me that explicitly contradicts the views I expressed.  That is neither charitable nor friendly.  In my response to GBJames, I did not point any of that out.  Instead, I merely explained that I did, in fact, hope to gain some monetary reward when I wrote my essay.  I wrote:

"Actually, part of me did hope that my essay would still be considered for some monetary reward–I even emailed Russell just to see if there was a chancen–but mainly, I hoped (and still hope) my essay would help people see through the bad arguments that pass for “informed” philosophy in places such as this."
Here's how GBJames responded to that comment:

"Then why not assume that other people who responded were motivated by a similar desire to make what they think is a good case to convince others of their view?"

I then pointed out that GBJames had misunderstood me.  If you reread my initial comment, I did say that we should presume people were interested in changing Sam Harris' mind.  It is obvious that anybody who responded to the Challenge was trying to change minds, and nothing I wrote implies otherwise.  Yet, GBJames continued to mischaracterize my position in an uncharitable way.  Eventually, he wrote this:

"When you demean the arguments of others by saying they are simply motivated by money (compared to your own presumably noble motives) you poison the well. Similar to telling others that they aren’t thinking carefully enough. Such comments provoke the kind of response that the roolz prohibit. I’ll disengage now."
Now he not only repeats the same misrepresentation of my view, but claims I am poisoning the well.  That is nonsense.  If I were poisoning the well, that would mean I was trying to argue against a position by discrediting the source.  But I was not arguing against the people who responded to Sam Harris' essay, nor was I trying to discredit anyone as a reliable source.  The accusation of "poisoning the well" is ridiculous and shows a clear lack of careful thought.  In addition, GBJames claims that I was trying to put myself above the other people who responded to Harris' Challenge, even though I had already explained that I was, in fact, interested in a monetary reward.  That is simply careless.  How ironic that this all comes at the same time GBJames gets all high and mighty because I accused him of not thinking carefully enough about what I had been saying.

In sum, GBJames was arrogant, foolish, uncharitable and unfriendly, and stubbornly misrepresented my views, displaying a lack of careful thought.  Besides reasonshark (whose confusion and misunderstanding I have already documented), GBJames is the only person who expressed any displeasure at my posts before Coyne told me to leave his blog.  If that is the kind of poster that Coyne prefers to keep around, I'm happy to stay out of his playground.

What I think should be obvious is that I was not kicked off his blog for being rude or arrogant, or for breaking any rules.  I was kicked off because Jerry Coyne does not like what I have to say.

Monday, June 16, 2014

Response to Shuggy: On Morality and Personhood

There's one more post I want to respond to from Jerry Coyne's blog.  Shuggy writes:

"So what are the alternatives to well-being as a goal of ethics? I see “virtue” but how would virtue be defined without involving well-being? Isn’t the point of not raping, to quote a recent example, to maximise the well-being of those not raped?"

It's a good question:  What could be the goal of ethics, if not the maximization of well-being?  I assume the question is, what is the point of morality?  Why do people make ethical judgments at all, if not to promote well-being?

Of course, even if I cannot give a persuasive answer, it doesn't mean no answer can be given.  We should not assume that the goal of morality is to maximize well-being just because we cannot think of a different one.   That would be argumentum ad ignorantiam.   However, I do have an answer.

If we were going to approach this scientifically, we might consider an evolutionary perspective.  How might moral judgments be adaptive?  How might they increase our chances of successful reproduction?

It might be that moral judgments help solidify community bonds, establishing complex forms of reputability and trust.  The goal of ethics, then, might be to strengthen social interaction.  The goal, in that case, is not to maximize the well-being of all or even most conscious creatures.  It is not even to maximize human flourishing.  It is to maximize the chances of successful reproduction.  It may just be that moral judgments improve the chances of successful reproduction by promoting suffering.  Throughout history and even today, many moral judgments lead to suffering.  From an evolutionary point of view, this need not be considered a mistake.

I think of dignity as the fundamental moral property.  Morality, as a sociobiological process, is all about fostering dignity.  But there is no fact of the matter about how dignity can or should be fostered.  Dignity is not quantifiable.  Dignity is the property of having moral excellence, where moral excellence is a matter of social value.  It is normative, a matter of what is or is not considered just.  There is no fact of the matter about what is or is not just.  There are various reasons, and these can be agreed upon or not, but there is no metric for determining which are correct and which are incorrect.  There is no such thing as an incorrect moral judgment.  Dignity is not the sort of thing that can be scientifically measured.

You might say dignity is therefore an illusion.  If that is so, then so are the concepts of earning a living, deserving fair treatment, and being guilty or innocent of a crime.  These are concepts we live by.  You can call them illusions if you want.  You can, like the moral error theorists, claim that all moral judgments are false, and that all thought of dignity is an error.  I, however, prefer noncognitivism.  This is the view that dignity is real, but judgments about it are not truth-evaluable.  We can't talk about the elements of personhood--dignity, guilt, rewards, etc.--and expect anything to make our judgments true or false, but we should still take such talk seriously.  There is no conceivable alternative.

Sunday, June 15, 2014

Response to reasonshark

Since I'm not allowed to respond to reasonshark on Jerry Coyne's blog, and since reasonshark seems confused about my position and arguments, I'm posting a response here.  (I can't find any contact info for reasonshark, so if you know him/her, please get his/her attention for me).

I attempted to demonstrate what is wrong with the way Sam Harris interprets the word "should."  He says that it is synonymous with maximizing the well-being of conscious creatures.  So, "I should x" means the same thing as "If I x, it will maximize the well-being of conscious creatures."  This is how Sam Harris attempts to overcome the is/ought distinction (aka "the fact/value distinction").  According to him, moral judgments about what we should or ought to do are just statements about what will maximize the well-being of conscious creatures.

In my response to Harris' Moral Landscape Challenge, I presented a formal argument for the falsity of Harris' view.  My argument there is that we can conjoin clauses about maximizing well-being, but not "ought" statements.  For example, we can say, "Doing x will maximize well-being and doing y instead of x will also maximize well-being."  Yet, we cannot say, "We should x and we should also y instead of x."  This suggests that "should" does not mean what Harris thinks it does.
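The argument can be put schematically.  The abbreviations $S$ and $M$ are my own shorthand, not notation from the essay:

```latex
% $S(x)$: ``we should do $x$''
% $M(x)$: ``doing $x$ maximizes the well-being of conscious creatures''
% Harris' synonymy thesis: $S(x) \equiv M(x)$
\begin{align*}
& M(x) \wedge M(y)
  && \text{consistent: two alternative acts can each maximize well-being} \\
& S(x) \wedge S(y \text{ instead of } x)
  && \text{not assertible: the ``shoulds'' conflict}
\end{align*}
% If $S(x)$ really were synonymous with $M(x)$, the two conjunctions
% would stand or fall together.  Since the first is fine and the second
% is not, ``should'' cannot simply mean ``maximizes well-being''.
```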

Consider paradigm cases of how we use the word "should":

(1)  We should bring an umbrella because it is going to rain.
(2)  You shouldn't eat too much.  You'll get sick.
(3)  What should we do if it rains?

In all of these cases, the word "should" indicates that a justifying reason is called for.  In cases (1) and (2), the justifying reason is given as a way of answering the question raised by the "should."  Why should we bring an umbrella?  In other words, what reason would justify the bringing of an umbrella?  The justifying reason is that it is going to rain.  In case (3), the question is not asking for a justifying reason, but is asking for a course of action which is presumed to be justified by some reason.

With that in mind, consider the scenarios I presented:

Imagine this conversation:

Carol: We should do x, y and z.
Lucy: Why?
Carol: Well, if we do x, y and z, it will lead to happiness. It will maximize well-being.
Lucy: Oh, you’re right. Okay!

That’s a simple, common-sensical conversation, right? Nothing wrong with it.

Now imagine a world where people spoke the way Sam Harris thinks they do:

Carol: We should do x, y and z.
Lucy: Why?
Carol: Well, if we do x, y and z, it will lead to happiness. It will maximize well-being.
Lucy: I know what “should” means, Carol. Geez. Why are you lecturing me on the definition of “should”? I’m not a child. I’m asking for a REASON!

See, if Sam Harris were right, then you could never appeal to the maximization of well-being as a REASON for doing anything. You could never say you should x BECAUSE IT MAXIMIZED WELL-BEING. You’d have to give some other reason. But what sort of reason could you give?

The point of my question is this:  If Harris is correct about morality, then you could not possibly give a valid reason for why we should maximize the well-being of conscious creatures.  The very idea of trying to justify the maximization of well-being would be incoherent.  Of course, people might think they were giving reasons for maximizing well-being, but they would be very confused.

Now on to reasonshark's response.  On the one hand, reasonshark thinks that linguistic analysis is not going to be of any help.  And yet, Sam Harris is making an argument about language.  He is making a claim about what the word "should" means. I think Harris is wrong.  Why shouldn't I think that an argument to that effect would be fruitful?  Is there something about the methodology of linguistic analysis that makes it unreliable?  Not that I am aware of.  

reasonshark said, "You seemed to be saying that, if Harris’ point about “good=maximizing well-being” is correct, it would be obvious to anyone that they were synonyms (hence the “I know…” bit in your second example)."

No, that was not my point.  My claim was that, if Harris is correct, the second sort of conversation between Carol and Lucy would be plausible.  Not inevitable, but plausible.  It would not seem odd.  And yet, it does seem very odd.

What if I said we should not maximize the well-being of all conscious creatures?  Would that mean that maximizing the well-being of all conscious creatures does not maximize the well-being of all conscious creatures?  That is absurd, but it is what Harris would have us believe.  I, in contrast, think it means this:  There are justifying reasons for not maximizing the well-being of all conscious creatures.  And I think that's common sense.

As it happens, I'm a moral noncognitivist, which means I don't think justifications of reasons have truth conditions.  I don't think there's any fact of the matter about whether or not we should maximize the well-being of all conscious creatures.  That doesn't mean there aren't reasons.  There can be reasons for and against the maximization of well-being, but the justification of those reasons is not truth-evaluable.

reasonshark also says, "Any particular ethics, principally the normative kind (as I indicated in the post you replied to), presumes a metaethical theory to begin with, otherwise it’s empty." 

That is false.  No ethical system requires choosing between moral realism or anti-realism, for example.

reasonshark continues:  "Your question ["Why should I maximize well-being?"] is not the same regardless of which meaning you pick. If you were asking from normative grounds, then you are committing to a rival metaethical theory, however loosely, but you can’t challenge a metaethical theory by asking for a norm, because the norms are supposed to derive from the metaethics, so you’re supposed to ask in terms of another, rival, metaethical theory, an equal. The challenge otherwise makes no sense."

Norms are supposed to derive from metaethics?  How is that?  I think reasonshark is confused about the relationship between ethics and metaethics.

The point I made is this:  The meaning of "should" is the same regardless of your ethical or metaethical views.  The meaning of the question, "Why should I maximize well-being?", does not depend on your ethical or metaethical views.  It doesn't matter if you are a moral realist or a moral anti-realist; you should still agree that the question is asking for a justifying reason for maximizing well-being.  It doesn't matter if you're a deontologist, consequentialist, or virtue ethicist, either.  If anyone disagrees, feel free to explain.  Why should it matter if you're a moral realist or anti-realist?  Why should it matter what approach to ethics you favor?

Here is further evidence that reasonshark is confused about the difference between ethics and metaethics:  "metaethical grounds (challenging Harris’ claim to have solved the issue of what goodness is, principally)"

So, according to reasonshark, a metaethical ground is one which answers the question, "what is goodness?"  And yet, according to the field of philosophy, that question is a question for normative ethics, not metaethics.

It is ironic that I am accused of confusing ethics and metaethics by somebody who cannot tell the difference between them.

Not to go off on a tangent, but . . . Perhaps you can see why it is hard to be patient when trying to defend the practice of philosophy on Jerry Coyne's blog.  

Now, it seems reasonshark is also confused about my argument.  Apparently, reasonshark thinks that I am asking for some reason to maximize well-being.  reasonshark writes:  "it’s not clear what you mean by asking for a reason. Are you looking for a real-world explanation of how goodness and badness arise from otherwise morally neutral physics, or an appeal to your self-interest?"

I was NOT asking for a reason.  I was pointing out that Harris' view does not align with a common-sense view of the language.

reasonshark: "You were criticizing Harris’ metaethical theory with a thought experiment that treated it like a normative claim, that isn’t solid as a metaethical critique, and all while presupposing that the way we use language is a valid critique of the “goodness”=”well-being” idea."

No, I was not critiquing the "goodness"="well-being" idea in my conversation with reasonshark.  I was critiquing Harris' claim about the meaning of the word "should."  That should have been obvious.  (I.e., there are justifying reasons to think that it was obvious.  Not that its having been obvious would somehow maximize the well-being of conscious creatures.)

At this point, I have to wonder why I'm bothering to reply at such length to reasonshark.  But there's one more error that needs to be corrected.  reasonshark says: "I don’t agree that Harris is correct when he says Dennett is trying to change the subject."

I did NOT say that Harris was correct in accusing Dennett of changing the subject.  I just pointed out that Harris has, in fact, accused Dennett of changing the subject.

As a final note, reasonshark accused me of making a personal indictment when such was not my intention.  reasonshark criticized the idea that linguistic analysis could be of use.  I (somewhat rudely) claimed that my linguistic point seems appropriate.  The rudeness came from the fact that I do not always have the patience to be nice to people who dismiss legitimate philosophical tools on a Website that is notorious for dismissing the practice of philosophy in general--especially when they don't have a problem with the use of those same tools by people they favor, like Sam Harris.  So, if I offended you, reasonshark, it is because I think you should be more careful in your criticisms of conceptual analysis.

I've Offended Jerry Coyne

He told me not to post on his Website anymore and said I should not call him "Coyne."   Check it out:  LINK.  I guess the use of last names is a touchy subject for some people.  I guess "Dr. Coyne" would have been more appropriate.  "Jerry" is surely too familiar.  Whatever.  I'm sure that's not why I've been kicked off his blog.  He suggested it's because I'm arrogant.  But when I posted again (not having had time to see his initial comment telling me to leave his Website), he suggested that I'm too interested in drawing attention to my response to Sam Harris' Moral Landscape Challenge, though that challenge was the subject of the thread.  And with that, he revoked my posting privileges.

My tone did rub one or two people the wrong way.  I admit, I was not at my most patient or humble.  I wouldn't say I was arrogant, exactly.  I was confident and aggressive, yes, and a little rude, but that's about it.  And I'm not sure my tone was entirely uncalled for.  Nor would I say I led the discussion in unseemly directions.  And yet, I was booted without even being given a warning.  I suspect that if I had exhibited the same tone and made similarly strong posts with arguments that were more congenial to his position, I would not have been booted.  I might have been given a warning, but that's it.  I wish I could say I'm shocked or even surprised, but I'm not.

Saturday, May 10, 2014

Neil deGrasse Tyson and Philosophy

Neil deGrasse Tyson has again offended professional philosophers, and I again have something to say to him about it.

Dear Mr. Tyson,
Like others here, I am also grateful that you have taken the time to participate in this discussion. I am most intrigued by your recent comment about philosophical contributions made by physicists in the 20th century. Could you mention some of the more important philosophical contributions you are talking about? What makes something a *philosophical* contribution, in your view?
Of course, professional philosophers sometimes disagree amongst themselves about what constitutes a significant philosophical contribution. One of the issues in the discipline is the variety of attitudes and expectations of philosophy itself. I am sure I am not the only one who would like to know your views on the topic. In fact, I imagine the main problem many philosophers have with your recent remarks is that you seem to be taking an authoritative attitude towards what counts as a meaningful or important philosophical contribution, when your formal training is not in philosophy at all. You are, in that sense, stepping on some academic toes. Perhaps feelings will be less hurt, toes less bruised, if you could explain to us just what sorts of philosophical contributions have been made by physicists–or, even better, what sorts of philosophical problems you think physicists should be interested in.
Jason Streitfeld
P.S. In the link you provided, when you responded to a question about cutting funding for philosophy departments, you seem to basically be saying that philosophers are essentially “wanna-be physicists.” You then go on to say that there are plenty of other ways that philosophers can make contributions, but the characterization is still rather condescending. You’re clearly talking about philosophers in the last couple of centuries, and not throughout history. But I still wonder, which philosophers do you think were wanna-be physicists? Do you think they were misguided, focusing their attention on empirical questions when they should have been focusing instead on properly philosophical concerns?
In that same link, you say that you are disappointed by the fact that so much brain power has been taken away from the physical sciences. The implication is that, even though you recognize that there is plenty of *philosophical* work to be done, you think the scientific work is more important. You would be happier if philosophers were trained in physics instead of philosophy. Pardon my speculation, but it seems you might be a little at odds with yourself. On the one hand, you want to applaud and embrace all of the important work philosophers do; on the other hand, you’d be happier if they weren’t doing it at all. Is that about right?