Like I said in my first comment, the in-practice difficulty of alignment is obviously connected to timelines and takeoff speed.
But you're right that in this post you're talking about the intrinsic difficulty of alignment vs. takeoff speed, not the in-practice difficulty.
But those are also still correlated, for the reasons I gave: mainly that a discontinuity is an essential step both in Eliezer-style pessimism and in fast-takeoff views. I'm not sure how close this correlation is.
Do these views come apart in other possible worlds? I.e. could you believe in a discontinuity to a core of general intelligence but still think prosaic alignment can work?
I think that potentially you can, if you think pre-HLMI AI (pre-discontinuity) will still have enough capabilities to help you do alignment research before dangerous HLMI shows up. But prosaic alignment seems to require more assumptions to be feasible given a discontinuity, such as the assumption that the discontinuity doesn't occur before you have all the important capabilities you need to do good alignment research.
I'm not sure I agree that discontinuity and prosaic alignment are compatible, though you make a reasonable case. I do think, however, that slower governance approaches are compatible with a discontinuity, if it is far enough away.