That’s part of what makes it frustrating: many passages do look exactly like the fallacy of grey (thanks for reminding me of the name; I simply couldn’t remember it), and he seems to half-recognize this in some of the later sections, such as where he describes how a defender of Bostrom might point out that the goal of the fable was to motivate us to eliminate one particularly bad dragon.
But he also took pains to state explicitly, at one point, that his concern is with fundamental limits, so anyone who read only the abstract, or only the (many) parts that look like the fallacy of grey, could instantly be smacked down with ‘you clearly did not read my paper carefully, because I am not concerned with the transhumanists’ incremental improvements but with the final goal of perfection’.
The paper is muddled enough that I don’t think this was deliberate, but it does impress me a little bit.
Annoyance was my reaction as well. It seems to me that in the places where he does not commit the fallacy of grey, he merely restates limits that any LW-style transhumanist already understands: e.g., in an em scenario without a friendly singleton there will still be disease, injury, and death; and even given a friendly singleton, if “continuous improvement” is to stay meaningful, we only get about 28,000 subjective years until the heat death of the universe; etc.
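For what it’s worth, here is one way a figure like that 28,000-year one can be reconstructed; the specific numbers are assumptions of mine, not anything from the paper or the comment. Suppose “meaningful continuous improvement” means at least a 1% gain per subjective year, and suppose physical limits (on the order of the $\sim 10^{120}$ total operations sometimes estimated for the observable universe) cap the total improvement factor. Then the number of improvable subjective years $T$ is

$$T = \log_{1.01}\!\left(10^{120}\right) = \frac{120 \ln 10}{\ln 1.01} \approx 27{,}800 \approx 28{,}000,$$

so compounding improvement against any fixed physical ceiling runs out after only logarithmically many subjective years, which is the point being restated.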