I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing and I’m curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg’s loss without my noticing.
I really do think that at a best guess, according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human ‘unachieved but reasonably achievable without superintelligence flourishing’ outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn’t be surprised if the former is a MUCH bigger problem than suffering. I also wouldn’t be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected utility calculus given my current set of heuristics. The modal dis-value I might put on a pig living its life in a factory farm is under half the median, which is under half the mean.
I agree with this point, and I’d bet karma at better than even odds that so does Michael Vassar.