re: no claim on which is bigger: when the goal is to guarantee less than epsilon probability of anything in a category happening, I actually agree with this. The reason I don't agree now is that we're really, really far away from "guarantee less than epsilon of anything in this category". In other words, it only seems plausible to ignore relative magnitudes once the probability of anything-in-a-bad-category-happening is so low that our representation can't usefully compare the probabilities of the different bad things. You'd only end up in a place like that after already having solved strong cosmopolitan alignment.
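To make the "representation can't usefully compare" condition concrete, here's a minimal sketch using float64 underflow as a stand-in for that regime. The specific probabilities are made up for illustration; the point is just that once both probabilities fall below what the representation can hold, they become literally indistinguishable, even though one is astronomically smaller than the other:

```python
import math

# Two hypothetical bad outcomes, expressed as log-probabilities.
log_p_a = -800.0   # p_a = exp(-800)
log_p_b = -1600.0  # p_b = exp(-1600), astronomically less likely than p_a

# Both underflow to 0.0 in float64, so the representation can no
# longer distinguish them:
p_a = math.exp(log_p_a)
p_b = math.exp(log_p_b)
print(p_a == p_b)  # True: the floats can't tell the two risks apart

# In log-space the ordering is still recoverable:
print(log_p_a > log_p_b)  # True: p_a really is the larger risk
```

The analogy: only once every bad-category probability has been driven into that "both read as zero" regime does it stop mattering which one is bigger; until then, the comparison is still meaningful.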
I also think there are cases where you need to keep up pressure against bad things in order to prevent collapse of a complex interaction process. For example, if any spot on your body goes undefended by the immune system, it's close to just as bad as any other spot going undefended, because an undefended region would allow pathogens to gain a foothold. It's not literally equivalent (different infections can in fact be quite different), but a similar intuition is justified: there has to be some minimum level of defense against some kinds of bad things. This is more an argument about the structure of comparisons, though, and doesn't let you go fully Knightian.
That second argument is where I think we are with things like this. If we, humanity, don't on net apply at least some minimum amount of pressure against all bad things, then we lose things we're trying to save along the way: for example, a cosmopolitan moral society that could sooner or later be uplifted by cosmopolitan moral AI. Bluntly: if a group is exterminated, you (probably*) can't bring them back with superintelligence.
(*Unless it turns out resurrection-with-math is possible. I've had arguments about this; basically it seems like a very bad bet to me, and I'd much prefer people just not die.)