Thanks, I’ve seen/skimmed your sequence. I think I directionally agree with your conclusions, though not fully, and I’m unsure. My current thinking is that humanity clearly shouldn’t be attempting an AI transition now, and that stopping AI development has the fewest problems with unawareness: it involves the least radical changes, so it’s the easiest path to predict and steer, and the least likely to have unforeseen strategic complications. Once that’s achieved, we should carefully and patiently try to figure out all the crucial considerations until it looks like we’ve finally found all of the most important ones, and only then attempt an AI transition.