I’m not sure the ambiguity of “plausibility” is the problem with the conditional there. Suppose we replaced plausibility with “>10% chance as judged by an ideal reasoner knowing everything either of us knows.” Then the conditional still couldn’t be true as a general principle, because there could be more than one cause area meeting that protasis, while the apodosis specifies uniqueness. Maybe insect suffering and AI suffering are each 15% likely to be millions of times greater in importance than anything else, but they can’t both be the only thing worth working on.
OTOH a symmetric framing like
If an issue plausibly causes a million times more suffering than anything else in the world, then it’s plausibly the only thing worth working on
is much more, ah, plausible (although I might still be reluctant to embrace it). On learning of the potential scope of insect suffering, one should give it some pretty serious consideration.
(Agreed that replacing linear arguments with n-lemmas is in most cases an improvement.)