What other, perhaps slightly more complex or less obvious, crucial considerations are we still missing?
I agree this is very important. I’ve argued that if we appropriately price in missing crucial considerations,[1] we should consider ourselves clueless about AI risk interventions (here and here).
[1] Also relatively prosaic causal pathways we haven’t thought of in detail, not just high-level “considerations” per se.
Thanks, I’ve seen/skimmed your sequence. I think I agree directionally, though not fully, with your conclusions, but am unsure. My current thinking is that humanity clearly shouldn’t be attempting an AI transition now, and that stopping AI development has the fewest problems with unawareness: it involves the least radical changes and is therefore the easiest to predict and steer, and the least likely to have unforeseen strategic complications. Once that’s achieved, we should carefully and patiently try to figure out all the crucial considerations until it looks like we’ve found all of the most important ones, and only then attempt an AI transition.