To the first part: yes, of course, my claim isn’t that anything here is axiomatically unfair. It absolutely depends on the credences you give for different things, and the context you interpret them in. But I don’t think the story is, in practice, justified.
If, instead, your concern is that the correspondence between Klurl’s hypothetical examples and what they found when reaching the planet was improbably high, then I agree that is very coincidental, but I do not think that coincidence is being used as support for the story’s intended lessons.
This is indeed approximately the source of my concern.
I think in a story like this, if you show someone rapidly making narrow predictions and then repeatedly highlight how much more reasonable they are than their opponent, as a transparent allegory for your own narrow predictions being more reasonable than a particular bad opposing position, in a post signposted as nonfiction inside a fictional frame, there really is no reasonable room to claim that people weren’t meant to read things into the outcomes being predicted. Klurl wasn’t merely making hypothetical examples; he was acting on specific predictions. It is actually germane to the story, and bad to sleight-of-hand away, that Klurl was often doing no intellectual work. It is actually germane to the story whether some of Trapaucius’ arguments have nonzero Bayesian weight.
The claim that no simple change would have solved this issue seems like a failure of imagination, and anyway the story wasn’t handed down to its author in stone. One could just write a less wrong story instead.
I don’t think Eliezer’s actual real-life predictions are narrow in anything like the way Klurl’s coincidentally-correct examples were narrow.
Also, Klurl acknowledges several times that Trapaucius’ arguments do have non-zero weight, just nothing close to the weight they’d need to overcome the baseline improbability of such a narrow target.