I think it would be great to start with a theory that sounds very scientific but is unfalsifiable, and therefore useless. Then we modify the theory to include an element that is falsifiable, and the theory becomes much more useful.
For example, suppose we have a new medicine: it is very good for some people, but it kills other people who take it. Naturally, we want to know who would be killed by the medicine, and who would be helped by it.
A scientist has a theory. He believes there is a gene that he calls the “Spottiswood gene”. Anyone who has the proper form of the Spottiswood gene will be safe; they can take the medicine freely. But some people have a broken version of the Spottiswood gene, and they die when they take the medicine. Unfortunately, the scientist has no way of detecting the Spottiswood gene, so he can’t tell you whether you have it or not.
Now this theory sounds very scientific, and it’s got lots of scientific words in it, but it isn’t very useful. The scientist doesn’t know how to detect the gene, so he can’t tell you whether you are going to live or die. He can’t tell you whether it is safe to take the medicine. If you take the pill and you survive, the scientist will say that you had the working version of the gene. If you take the pill and you die, the scientist will say that you had the broken version of the gene. But he cannot say what will happen to you until after it has already happened, so his theory is useless. He can explain anything, but he can’t make predictions in advance.
Now another scientist has a different theory. She thinks that the medicine is related to eye color: anyone with blue eyes will die if they take the medicine, and anyone with brown eyes will be okay. She’s not sure why this happens, but she plans to do more research and find out. Even if she doesn’t do any more research, her theory is much more useful than the first scientist’s theory. If she’s right, then blue-eyed people will know that they should avoid the medicine, and brown-eyed people will know that they can take it safely. She has made predictions. She predicts that no brown-eyed person will die after taking the medicine, and that no blue-eyed person will live.
Of course, the second scientist might be wrong. But the interesting thing is that if she’s wrong, we can prove that she’s wrong. She predicted that no one with brown eyes would die after taking the medicine, so if lots of brown-eyed people die, we will know that she’s wrong.
If her theory is wrong, then we should be able to show that it’s wrong. And if the evidence fails to show that she’s wrong, we accept that she’s probably right. That’s called falsifiability.
But the first scientist’s theory isn’t falsifiable. Even if he’s wrong, we’ll never be able to prove it, and that means we’ll never know whether he’s wrong or right. More importantly, even if he is right, his theory still wouldn’t do anybody any good.
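To make the contrast concrete, here is a minimal sketch in Python (all the names, like `eye_color_theory` and `spottiswood_theory`, are made up for this illustration). The point is that a falsifiable theory is a function of things we can observe before the outcome, while the unfalsifiable theory can only relabel the outcome after it has happened:

```python
# Illustrative sketch only; names and data are invented for this example.

def eye_color_theory(person):
    """Second scientist: predicts the outcome from an observable trait."""
    # Eye color can be checked before anyone takes the medicine,
    # so this prediction can be tested in advance, and therefore falsified.
    if person["eye_color"] == "blue":
        return "will die"
    return "will be fine"

def spottiswood_theory(person, outcome=None):
    """First scientist: 'explains' the outcome, but only after the fact."""
    # The gene is undetectable, so there is nothing observable to base
    # a prediction on. All the theory can do is relabel the outcome
    # once it is already known.
    if outcome is None:
        return "no prediction possible"
    if outcome == "died":
        return "must have had the broken gene"
    return "must have had the working gene"

patient = {"name": "Alice", "eye_color": "brown"}

print(eye_color_theory(patient))             # testable in advance: "will be fine"
print(spottiswood_theory(patient))           # "no prediction possible"
print(spottiswood_theory(patient, "died"))   # explains anything, predicts nothing
```

If `eye_color_theory` ever says “will be fine” about someone who then dies, the theory is refuted; `spottiswood_theory` can never be caught out this way, which is exactly why it is useless.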
I can’t speak for Eliezer’s intentions when he wrote this story, but I can see an incredibly simple moral to take away from this. And I can’t shake the feeling that most of the commenters have completely missed the point.
For me, the striking part of this story is that the Jester is shocked and confused when they drag him away. “How?!” he says. “It’s logically impossible.” The Jester seems not to understand how it is possible for the dagger to be in the second box. My explanation goes as follows, and I think I’m just paraphrasing the king here.
1- If a king has two boxes and a means to write on them, then he can write any damn thing on them that he wants to.
2- If a king also has a dagger, then he can place that dagger inside one of the two boxes, and he can place it in whichever box he decides to place it in.
That’s it. That’s the entire explanation for how the dagger could “possibly” be inside the second box. It’s a very simple argument that a five-year-old could understand, and no amount of detailed consideration by a logician is going to stop it from being true.
The Jester, however, thought it was impossible for the dagger to be in the second box. Not just that it wasn’t there, but that it was IMPOSSIBLE. That’s how I read the story, anyway. He used significantly more complicated logic, and he thought he’d proven it impossible. But it only takes a moment’s reflection to see that he’s wrong.
Some of the comments above have tried to work out what was wrong with the Jester’s logic, and they’ve explained the detailed and subtle flaws in his reasoning. That’s great if you want to develop a deep understanding of logic, self-referential statements, and mathematical truth values (and let’s be fair, I suppose most of us do), but in the context of the Sequences on rationality, I think there’s a much better lesson to learn.
Remember: rationalists are supposed to WIN. We’re supposed to develop reasoning skills that give us a better and more useful understanding of reality. So the lesson is this: don’t be seduced by complex and detailed logic if that logic is taking you further and further away from an accurate description of reality. If something is already true, or already false, then no amount of reasoning will change it.
Reality is NOT required to conform to your understanding or your reasoning. It is your reasoning that should be required to conform to reality.