I’m pretty sure that FAI+CEV is supposed to prevent exactly this scenario, in which an AI is allowed to come to its own, non-foreordained conclusions.
FAI is supposed to come to whatever conclusions we would like it to come to (if we knew better etc.). It’s not supposed to specify the whole of human value ahead of time, it’s supposed to ensure that the FAI extrapolates the right stuff.