My personal suspicion is that an AI being indifferent among a large class of outcomes matters little; it's still going to absolutely ensure that it hits the Pareto frontier of its competing preferences.
Hitting the Pareto frontier looks very different from hitting the optimum of a single objective.
I don't think the arguments that rely on EU maximisation translate.
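To make the distinction concrete, here's a minimal sketch (with made-up outcomes and two hypothetical competing preferences) of how a Pareto frontier differs from a single-objective optimum: the agent can be indifferent among several frontier points, yet it still rules out every outcome dominated on all of its preferences.

```python
def dominates(a, b):
    """True if outcome `a` is at least as good as `b` on every objective
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(outcomes):
    """Return the outcomes not dominated by any other outcome."""
    return [o for o in outcomes
            if not any(dominates(other, o) for other in outcomes if other != o)]

# Hypothetical scores on two competing preferences: (pref_1, pref_2).
outcomes = [(3, 1), (1, 3), (2, 2), (1, 1), (0, 2)]

print(pareto_frontier(outcomes))          # → [(3, 1), (1, 3), (2, 2)]
print(max(outcomes, key=lambda o: o[0]))  # single-objective optimum → (3, 1)
```

Three incomparable outcomes survive as Pareto-optimal, whereas maximising one objective alone collapses the choice to a single point; arguments that assume the latter picture don't automatically carry over to the former.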