Sunday, March 06, 2005

It Itches, I'm Tellin' Ya...

Certain Doubts has an interesting post on Keith Lehrer's Pain/Itch Example. The idea is that it's supposed to show we can be "mistaken even about the contents of our own minds." The example, taken from Jonathan Kvanvig's post at Certain Doubts, goes like this:

You go to the doctor complaining of an itch. He listens to your complaint, observes the location of the itch, writes down how the problem started, and the details about physical symptoms including duration and intensity of the experience. Then he tells you that he thinks he knows what the problem is. He tells you that it’s not really an itch, but a pain. People have confused these two in the past, but we now have a well-confirmed theory that distinguishes the two in a slightly different way than “the folk” do. The theory has led to two technologies. One is a machine for distinguishing the two underlying states, and the other is medicine for treating the two conditions. Your doctor tells you that one of the medicines will solve the problem if you’re experiencing a pain, but not the itch; and the other medicine will have the alternative results. You insist that you’re experiencing an itch, but he uses the machine and shows you the results: you’re in pain, it says. If you still insist, he’ll give you the itch medicine. You do, and he does; you return two weeks later, still suffering, and ask for the pain medicine. You take it and get well. So you say, “I guess I was wrong. It was a pain, not an itch, after all!”


Like several respondents, I was left unimpressed (see Jonah Shupbach's good criticism here). Why should I think that this machine the doctor (or whoever) built is in a better position to judge my internal states than I am? Even if the machine can tell the difference between the two sensations, why does that mean that I am in pain?

Beyond that, it seems to me that Lehrer has made some unwarranted assumptions in formulating this example. In situations like this I prefer to tread lightly, since the example was formulated by an intelligent man, so I offer the caveat that I might be missing something. At any rate, Lehrer designed the example to show how we might be mistaken about the contents of our own minds. He first needs to prove, I think, that such a mistake is possible in a clear case like this, when our mind is not clouded in some way (perhaps he does so in some other article I have not read, but I tend to doubt it). He simply assumes that it is possible, and I think that assumption is not warranted. And if the example is intended as a proof that such error is possible, then it is circular, because it assumes that very possibility at the outset (which is why I doubt it was meant as such a proof).

I can build a similar example to show why 2+2 actually equals 5 rather than 4, but I wouldn't expect people to believe that either. Imagine you are in a math class and your professor puts a giant red 'X' next to your answer of '4' to the problem '2+2=?'. You challenge her on this and she says, "Actually, it is 5. Humans only think it's 4 because of a trick of the brain. We ran it through a sophisticated math program invented by researchers from MIT, and the answer came out to be '5'. We repeated the calculations numerous times and have come to the conclusion that the way the 'folk' add two and two is slightly off." So off you go to check on this math program. You purchase it and have some software experts analyze it, and they assure you it's perfect. So you conclude that 2+2 does equal '5'.

Of course, just because I can build an example in which 2+2 equals five doesn't mean that such a thing is possible. I would have to prove its possibility before I even tried to show how it could come about. Similarly, for Lehrer's example to work, you would have to assume from the outset the very possibility of our being wrong about our mental states. Any comments?
