Do you think that ethical questions could be more relevant for this than they are for alignment? For example, the difference between [getting rid of all humans] and [uploading all humans and making them artificially incredibly happy] isn’t important for AI alignment since they’re both cases of unaligned AI, but it might be important when the goal is to navigate between different modes of unaligned AI.
This sounds totally convincing to me.