Highly related classic LW post: The Rocket Alignment Problem
I agree that feedback control will probably be very important (and probably in fact ~necessary) for successful alignment of superintelligence.
I don’t think we have a great plan for how to achieve this yet, or even a good plan.