Nick, I’m afraid a faction[1] of your moral parliament may have staged a (hopefully temporary) coup, because if all of the representatives were still in a cooperative mood, you would presumably have inserted at least a few more sentences framing this differently to mitigate potential risks. You have enough people around you who would be happy to help with that, even if you “have no comparative advantage” in it. (Comparative advantage is supposed to be an argument for trade, not an excuse for ignoring risks and downsides to your other values!)
[1] Perhaps a coalition of egoism, person-affecting altruism, and intellectual pursuit for its own sake.
I agree with the concern in general, but I think we very much should not concede (to people with EPOCH-type beliefs, for instance) that AI accelerationism is an acceptable conclusion for people with person-affecting views, as your endnote implies a bit.

First, even on Bostrom’s analysis, pausing for multiple years makes sense under quite a broad class of assumptions. (Personally, I think it is clearly mistaken to put less than 15% on the risk of AI ruin; my own credence is well above 50%.)

Second, as Jan Kulveit’s top-level comment here pointed out, person-affecting views encompass more than crude welfare-utilitarian considerations: it also matters that some people want their children to grow up, or want humanity to succeed in the long run even at some personal cost.

Finally, see the last paragraph of my reply to habryka: other civilizations in the multiverse also matter on person-affecting views, and it would be quite embarrassing and bad form if our civilization pressed “go” on something that is 80% or 95% likely to get out of control and follow Moloch dynamics, when we could instead take more care and add to the “cosmic host” a citizen that is more likely to be cooperative and decent.