I remember that you posted some variant of this idea as a short form or a post a while ago. I can see that you feel the idea is very important, and I want to respond to it on its own terms. My quick answer is that even under the “same” morals, people can still take quite destructive actions toward others, because most choices are made under a combination of perceived values (moral beliefs) and perceived circumstances (factual beliefs). Longer answer follows:
C. S. Lewis once tried to create a rough taxonomy of worldwide moral standards (the appendix to The Abolition of Man), showing that ideas such as the golden rule (do unto others what you would have others do unto you) and variants like the inverse golden rule (do not do unto others what you would not have others do unto you) were surprisingly popular across cultures. This was part of a broader project which is actually quite relevant to discussions of transhumanism: he was arguing that what we would now call eugenics and transformative technology would annihilate “moral progress”. But we can set the transhumanist argument aside for now and just focus on the shared principles, things like “family is good” or “killing is bad”.
First of all, it should be a bad sign for your plan that such common principles can be identified at all, since it suggests that people may already hold broadly similar morals and yet still come to different conclusions about what is to be done. Second, it quickly becomes clear that some shared moral principles can lead to quite strong conflicts: I’m thinking of morals like “me/my family/my ethnic group is superior and should come first/take priority in situations of scarcity and danger”. If four different nations are led by governments with that same belief, and the pool of resources becomes limited, fighting will almost certainly break out. This is true even when cooperation would prevent the destruction of valuable limited resources, leaving more in total for all the nations involved!
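To make that last point concrete, here is a tiny toy model of the resource conflict. The numbers and the Python sketch are mine, purely illustrative, and not part of the original idea: fighting shrinks the shared pool, yet under a “my group comes first” morality each government still prefers to grab.

```python
POOL = 100.0         # total resources if nobody fights
DESTRUCTION = 0.4    # fraction of the pool destroyed if anyone fights

def payoffs(strategies):
    """strategies: a list of 'share' or 'grab', one entry per nation.
    Returns each nation's slice of the (possibly shrunken) pool."""
    n = len(strategies)
    if all(s == "share" for s in strategies):
        return [POOL / n] * n                 # full pool, split evenly
    pool = POOL * (1 - DESTRUCTION)           # any fighting destroys resources
    grabbers = [i for i, s in enumerate(strategies) if s == "grab"]
    sharers = [i for i, s in enumerate(strategies) if s == "share"]
    grab_pot = pool if not sharers else 0.8 * pool   # grabbers take the lion's share
    out = [0.0] * n
    for i in grabbers:
        out[i] = grab_pot / len(grabbers)
    for i in sharers:
        out[i] = (pool - grab_pot) / len(sharers)
    return out

print(payoffs(["share"] * 4))                        # [25.0, 25.0, 25.0, 25.0] -> total 100
print(payoffs(["grab", "share", "share", "share"]))  # grabber gets 48.0, the others 4.0 -> total 60
print(payoffs(["grab"] * 4))                         # [15.0, 15.0, 15.0, 15.0] -> total 60
```

In this toy setup each nation does better by grabbing no matter what the others do, so all four end up with 15 instead of the 25 each could have had by cooperating, despite everyone sharing the same moral principle.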
From a broader perspective, the steelman of your idea seems to be something like “if we get people to discuss, they will quickly realise that their ideas about what is moral are insufficient, and a better set of morals will emerge”. So they might converge on something like “we are all one global family, and we should all stick together even when we disagree, because conflict is terrible”. However, this is where the circumstances part of the choice comes in. I can agree in principle that unity is good and that life is sacred; but if I believe that someone else does not share those ethics, and is (for example) about to kill me or rob me, I might act in self-defence. Most of us would call that justified, even though it violates my stated values. Today many leaders pay lip service to respecting human rights and international norms… but it’s just that those evil evildoers are so evil that we need to do something serious to stop them. My values are pure; circumstances forced my hand. And so on, and so forth.
Now, if you can truly convince everyone that everyone else is also a reasonable and nice human being, then maybe some progress can be made, but this is a very, very difficult thing, especially when there are centuries of conflict to deal with and legacies of complex and multilayered traumas in many parts of the world. So all in all I think this proposal is very unlikely to succeed. I hope this makes sense.
Yea, it’d be a bonus to convince/inform folks that, if this works out, other people won’t be evil, & if we don’t do that then some folks still might do bad things bc they think other folks are bad.
But as long as one doesn’t see a way this idea makes things actively worse, it’s still a good idea!
Thanks for pointing that out tho. Will add that (“that” being “Making sure folks understand that, if this idea is implemented, other folks won’t be as evil, and you can stop being as bad to them”) to the idea.
Thanks!