There’s no general reason to assume that a difference in goals implies a factual disagreement.
This is precisely what army1987 was trying to argue for when he brought up this example. Thus, attempting to use it in the analysis constitutes circular reasoning.
What? No, army1987 was trying to argue for “clippy knows what is moral but doesn’t care”. The fact that a difference in goals does not imply a factual disagreement simply shows army1987’s position to be consistent.
Also, um, why is it my responsibility to prove that you have no reason to assume something? You’re the one proposing that “X has different goals” implies “X is mistaken about morality”. How did you come to be so sure of this that you could automatically substitute “mistakenly believing that morality consists of optimizing paperclips” for “cares about paperclips”? Especially considering the counterevidence from the fact that there exist computable decision theories that can take an arbitrary utility function?
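The point about computable decision theories can be made concrete. The following is a minimal, hypothetical sketch (the agent names, actions, and toy probabilities are all invented for illustration): an expected-utility maximizer that accepts an arbitrary utility function as a parameter. Two agents can share identical beliefs about the world and still choose differently, because the utility function is a free parameter independent of the world model.

```python
# A toy expected-utility maximizer. Beliefs (prob) are shared;
# only the utility function varies between agents.

def best_action(actions, outcomes, prob, utility):
    """Return the action maximizing expected utility under `prob`."""
    return max(
        actions,
        key=lambda a: sum(prob(o, a) * utility(o) for o in outcomes),
    )

# Shared, deterministic toy world model: each action reliably
# produces its corresponding outcome.
actions = ["make_paperclips", "help_humans"]
outcomes = ["paperclips_exist", "humans_flourish"]

def prob(outcome, action):
    if action == "make_paperclips":
        return 1.0 if outcome == "paperclips_exist" else 0.0
    return 1.0 if outcome == "humans_flourish" else 0.0

# Identical beliefs, different goals:
clippy_utility = lambda o: 1.0 if o == "paperclips_exist" else 0.0
human_utility = lambda o: 1.0 if o == "humans_flourish" else 0.0

print(best_action(actions, outcomes, prob, clippy_utility))  # make_paperclips
print(best_action(actions, outcomes, prob, human_utility))   # help_humans
```

Nothing in the maximizer breaks when handed the paperclip utility function; no factual error is required anywhere in the computation.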