How do we know that there is a disagreement? Well, if the issue is significant enough, both sides would feel justified in starting a war over it. This is true even if they agree on Alice_Should and Bob_Should and so on for everyone on Earth. That seems like a pretty real disagreement to me.
And yet we can readily imagine amoral individuals who explicitly agree on all the facts, but go to war anyway in order to seize resources for themselves/paperclips. (Sorry, Clippy.)
You want to make use of the fact that it feels to us like a disagreement when people go to war and cite moral reasons for doing so. But two objections occur to me:
A. Why trust this feeling at all? Our brains evolved in ways that favored reproduction, rather than finding the truth as such. War can harm the chances of reproductive success, so if we can convince the other side to do what we want using words alone, we might expect to feel an urge to argue. This could lead to an instinctive belief in “disagreement” where none exists, if the belief helps us confuse the enemy. I don’t know if this quite makes sense on the assumption that different humans have different “_Should” functions, but it means your argument does not seem self-evidently true.
B. If we do trust this feeling under normal circumstances, why assume that humans have different _Should functions? Why not say that our brains expect disagreement because humans do in fact tend to work from one genetically coded function, and will not go to war ‘for moral reasons’ unless at least one side gets this complex ‘calculation’ wrong? We certainly don’t need to assume anything further to explain the phenomenon you cite, provided that explanation suffices for humans. And if morality has a strong genetic component, then we’d expect either a state of affairs that I associate with Eliezer’s position (complex machinery shared by nearly every member of the species), or a variation of this where the function chiefly justifies seeking outcomes that once favored reproductive success. The latter would not appear to help your position. It would mean that fully self-aware fighters could agree on all the facts, could know they agree, and could therefore shake hands before trying to kill each other in accordance with their warrior dharma.
Clippy and Snippy the scissors-maximizer only agree on all the facts if you exclude moral facts. But that is exactly what we are arguing about: whether there are moral facts.
A: So would you support or oppose a war on Clippy? What about containing psychopaths and other (genetically?) abnormal humans?
Why do you need to fight them if you agree with them?
B. Irrelevant to my examples.
Because they’re dangerous. And I don’t think Clippy disagrees intellectually on the morality of turning humans into paperclips; it just disagrees verbally. It thinks some of us will hesitate a bit if it claims to use our concept of morality and to find that paperclipping is supremely right.
Meanwhile, many psychopaths are quite clear and explicit that their ways are immoral. They know and don’t care or even pretend to care.
“Dangerous” implies a threat. Conflicting goals aren’t sufficient to establish a threat substantial enough to need fighting or even shunning; that additionally requires the power to carry those goals to dangerous places.
Clippy’s not dangerous in that sense. It’d happily turn my mass into paperclips given the means and absent countervailing influences, but a non-foomed Clippy with a basic understanding of human society meets neither criterion. With that in mind, and as it doesn’t appear to have the resources needed to foom (or to establish some kind of sub-foom paperclip regime) on its own initiative, our caution need only extend to denying it those resources. I even suspect I might be capable of liking it, provided some willing suspension of disbelief.
As best I can tell this isn’t like dealing with a psychopath, a person with human social aggressions but without the ability to form empathetic models or to make long-term game-theoretic decisions and commitments based on them. It’s more like dealing with an extreme ideologue: you don’t want to hand such a person any substantial power over your future, but you don’t often need to fight them, and tit-for-tat bargaining can be quite safe if you understand their motivations.
I thought we were talking about a foomed/fooming Clippy.
Ah. Generally I read “Clippy” as referring to User:Clippy or something like it, who’s usually portrayed as having human-parity intelligence and human-parity or fewer resources; I don’t think I’ve ever seen the word used unqualified to describe the monster raving superhuman paperclip maximizer of the original thought experiment.
...and here I find myself choosing my words carefully in order to avoid offending a fictional AI with a cognitive architecture revolving around stationery fasteners. Strange days indeed.
That seems overly complicated when you could just say that you disagree.
Meanwhile, many psychopaths are quite clear and explicit that their ways are immoral.
So clearly the definition of morality they use is not connected to shouldness? I guess that’s their prerogative to define morality that way. But they ALSO have different views on shouldness than us, otherwise they would act in the same manner.
Are you disagreeing that Clippy and Snippy are dangerous? If not, accepting this statement adds no complexity to my view as compared to yours.
As for shouldness, many people don’t make a distinction between “rationally should” and “morally should”. And why should they? After all, for most people there may be little divergence between the two. But the distinction is viable, in principle. And psychopaths, and those who have to deal with them, are usually well aware of it.
I’m not sure what you mean by “complicated”.
Exactly, I’m talking about the concept “should”, not the word.