Let me try addressing your comment more bluntly to see if that helps.
> Your complaint about Klurl’s examples is that they are “coincidentally” drawn from the special class of examples that we already know are actually real, which makes them not fictional.
No, Klurl is not real. There are no robot aliens seeding our planet. The fictional evidence I was talking about was not that Earth right now exists in reality; it was that Earth right now exists in this story, specifically at the point where it was used.
If you write a story where a person prays and then wins the lottery as part of a demonstration of the efficacy of prayer, that is fictional evidence even though prayer and winning lotteries are both real things.
> If you think that the way the story played out was misleading, that seems like a disagreement about reality, not a disagreement about how stories should be used.
No, I really am claiming that this was a misuse of the story format. I am not opposed to it because it’s not reality. I am opposed to it because the format implies that the outcomes are illustrations of the arguments, but in this case the outcomes were deceptive illustrations.
> If Trapaucius had arrived at the planet to find Star Trek technology and been immediately beamed into a holding cell, would that somehow have been less of a cheat, because it wasn’t real?
It would be less of a cheat in the sense that it would give less of a false impression that the arguments were highly localizing, and in that it would be more obvious that the outcome was fanciful and not to be taken as a serious projection. But it would not be less of a cheat simply in the sense that it wasn’t real, because my claim was never that this was cheating for using a real outcome.
Thank you for being more explicit.

> If you write a story where a person prays and then wins the lottery as part of a demonstration of the efficacy of prayer, that is fictional evidence even though prayer and winning lotteries are both real things.
In your example, it seems to me that the cheat is specifically that the story presents an outcome that would (legitimately!) be evidence of its intended conclusion IF that outcome were representative of reality, but in fact most real-life outcomes would have supported the conclusion much less than that. (I.e., there are many more people who pray and then fail to win the lottery than there are people who pray and then do win.)
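To put that in explicitly Bayesian terms (my own formalization, with made-up numbers purely for illustration): the evidential weight of a depicted outcome is its likelihood ratio, and the cheat is substituting a cherry-picked outcome for a representative one.

$$\text{posterior odds} = \text{prior odds} \times \frac{P(\text{win} \mid \text{prayer works})}{P(\text{win} \mid \text{prayer is inert})}$$

If winning is a 1-in-$10^{7}$ event and even a strongly effective prayer multiplies that chance by 10, the honest likelihood ratio is $10^{-6}/10^{-7} = 10$. By depicting the win as the natural sequel to the prayer, the story invites an update as if the ratio were astronomically larger; that gap is the cheat.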
If you read a story where someone tried and failed to build a wooden table, then attended a woodworking class, then tried again to build a table and succeeded, I think you would probably consider that a fair story. Real life includes some people who attend woodworking classes and then still can’t build a table when they’re done, but the story’s outcome is reasonably representative, and therefore it’s fair.
Notice that, in judging one of these fair and the other unfair, I am relying on a world-model that says that one (class of) outcome is common in reality and the other is rare in reality. Hypothetically, someone could disagree about the fairness of these stories based only on having a different world-model, while using the same rules about what sorts of stories are fair. (Maybe they think most woodworking classes are crap and hardly anyone gains useful skills from them.)
But I do not think a rare outcome is automatically unfair. If a story wants to demonstrate that wishing on a star doesn’t work by showing someone who needs a royal flush, wishes on a star, then draws a full house (thereby losing), the full house is an unlikely outcome, but since it’s unlikely in a way that doesn’t support the story’s aesop, it’s not being used as a cheat. (In fact, notice that every exact set of 5 cards they might have drawn was unlikely.)
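For concreteness, the arithmetic behind that parenthetical (my addition, using standard five-card draw probabilities):

$$P(\text{any one specific hand}) = \frac{1}{\binom{52}{5}} = \frac{1}{2{,}598{,}960} \approx 3.8 \times 10^{-7}$$

$$P(\text{royal flush}) = \frac{4}{2{,}598{,}960} \approx 1.5 \times 10^{-6}, \qquad P(\text{full house}) = \frac{3{,}744}{2{,}598{,}960} \approx 0.14\%$$

A full house is roughly a thousand times likelier than the royal flush the character needed, and its unlikeliness does nothing to flatter the wish, so depicting it carries no persuasive freight.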
If your concern is that Klurl and Trapaucius encountered a planet that was especially bad for them in a way that makes their situation seem far more dangerous than was statistically justified based on the setup, then I think Eliezer probably disagrees with you about the probability distribution that was statistically justified based on the setup.
If, instead, your concern is that the correspondence between Klurl’s hypothetical examples and what they found when reaching the planet was improbably high, then I agree that is very coincidental, but I do not think that coincidence is being used as support for the story’s intended lessons. The story is not trying to convince you that Klurl can narrowly predict exactly what they’ll find, and in fact Klurl denies this several times.
The coincidence could perhaps cause some readers to conclude a high degree of predictability anyway, despite lack of intent. I’d consider that a bad outcome, and my model of Eliezer also considers that a bad outcome. I’m not sure there was a good way to mitigate that risk without some downside of equal or greater severity, though. I think there’s pedagogical value in pointing out a counter-example that is familiar to the reader at the time the argument is being made, and I don’t think any simple change to the story would allow this to happen without it being an unlikely coincidence.
To the first part: yes, of course, my claim isn’t that anything here is axiomatically unfair. It absolutely depends on the credences you give for different things, and the context you interpret them in. But I don’t think the story in practice is justified.
> If, instead, your concern is that the correspondence between Klurl’s hypothetical examples and what they found when reaching the planet was improbably high, then I agree that is very coincidental, but I do not think that coincidence is being used as support for the story’s intended lessons.
This is indeed approximately the source of my concern.
I think that in a story like this, if you show someone rapidly making narrow predictions, and then repeatedly highlight how much more reasonable they are than their opponent, as a transparent allegory for your own narrow predictions being more reasonable than a particular bad opposing position, from a post signposted as nonfiction inside a fictional frame, then there really is no reasonable room to claim that people weren’t meant to read things into the outcomes being predicted. Klurl wasn’t merely offering hypothetical examples; he was acting on specific predictions. It is actually germane to the story, and bad to sleight-of-hand away, that Klurl was often doing no intellectual work. It is actually germane to the story whether some of Trapaucius’ arguments have nonzero Bayesian weight.
The claim that no simple change would have solved this issue seems like a failure of imagination, and anyway the story wasn’t handed down to its author in stone. One could just write a less wrong story instead.
I don’t think Eliezer’s actual real-life predictions are narrow in anything like the way Klurl’s coincidentally-correct examples were narrow.
Also, Klurl acknowledges several times that Trapaucius’ arguments do have non-zero weight, just nothing close to the weight they’d need to overcome the baseline improbability of such a narrow target.
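A minimal sketch of that last point, with illustrative numbers of my own rather than anything from the story:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

If the prior odds on such a narrow target are on the order of $10^{-9}$, then arguments worth a combined Bayes factor of 100, which is genuinely nonzero weight, still leave posterior odds around $10^{-7}$: acknowledged, but nowhere near enough.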