Related to: List of public drafts on LessWrong

Public Draft on Moral Progress: Text Dump

For now this is just a text dump relating to a conversation I had. I retracted my comments there, not because I found them so lacking, but because that particular Irrationality Game thread turned out to have been made by a likely troll. Expect changes in the next few days. Here is a link to the original conversation.
We have not been experiencing moral progress in the past 250 years. Moral change? Sure. I'd also be OK with calling it value drift. I talked about this previously in some detail here and here. I hope some of you have read that material before. It would also be neat if you read the meta-ethics sequence, particularly this post.
Against the Better Angels of Our Nature counterargument

Named after this excellent long book, which you guys really should read. Actually, someone should do a review of the book. Note to self: do a review in one year if no one else beats you to it.
The trend of moral progress has been one of less acceptance of violence, less acceptance of nonconsensual interaction, less victim blaming, and less standing by while terrible things happen to others (or at least looking indignant at past instances of this).
This leads to a falsifiable prediction: in the next one to four centuries, vegetarians will become a majority, jails will be seen as unnecessarily brutal and unjustifiably harsh, "the poor" will be less of an Acceptable Target (cf. delusions that they are "just lazy" and so on), and the present generation will be condemned for being so terrible at donating in general and at donating to the right causes. If all of those things happen, moral progress will have been flat-out confirmed.
I don't think I should be a vegetarian. Thus at best I feel uneasy about people four centuries from now thinking vegetarianism should be compulsory, and at worst I'll be dismayed by them spending time on related activities instead of on things I value. If I thought that was great, I'd already be a vegetarian, duh.
Also, I think I'd like some violence to be OK. Completely non-violent minds would be rather inhuman, and violence has some neat properties when viewed from the perspective of fun theory. In any case, I strongly suspect the general non-violence trend of the past few thousand years (documented by Pinker) was due to biological changes in humans caused by our self-domestication. Your point on consent is questionable. So is the one on victim blaming, since, especially in the 20th century, I would think all we saw was one set of scapegoats being swapped for another.
This leads me to suspect that Homer's FAI would be different from my own FAI, which would in turn be different from an FAI built on the values of 2400 AD. If FAI2400 gets to play with the universe forever instead of FAI2012, I'd be rather pissed. Just because you see a trend line in moral change doesn't mean there is any reason to outsource your future value edits to it. Isn't this the classic mistake of confusing is with should?
But if it were as you say, then all our worries about CEV and FAI would be silly, since our society apparently already, automagically, is something very similar to what we want; we would just need to figure out how to design it so that we can include emulated human minds in it while it continues doing its thing.
Yay positive singularity problem solved!
Is moral progress a coherent concept? What is moral progress?
Short answer: yes, I tentatively think it is. I need to work on making my answer to the second question more explicit, if not into an independent essay. I'll be citing some of the thinking Eliezer Yudkowsky has done on CEV and will also be relying on James_G's concept of the eminent self.
Do you believe that there is no non-arbitrary way to define "moral progress", or do you think that "moral progress" is a coherent concept, just that we haven't experienced it?
I think moral progress is a coherent concept, but I'm inclined to argue that no human society so far has experienced it, though obviously I can't rule out outliers that did so in certain time periods, since this is such a huge set. We have so little data, and there seems to be great variance in the kinds of values we see in these societies.
“Moral progress” simply describes moral change or value drift in the speaker’s preferred direction. Very confident (~95%).
I don't use it that way. I like lots of the moral changes of the past 250 years, but I feel the process behind them isn't something I want to outsource morality to. In the same way, I like having opposable thumbs but feel uncomfortable letting evolution shape humans any further. We should do that ourselves so it doesn't grind down our complex values.
There are lots of people running around who think society in 1990 is somehow morally superior to society in 1890 on some metric of rightness beyond the similarity of its values to our own. This is the difference between being on the "wrong side of history" being merely a mistake in reasoning that one should get over as soon as possible, and it being a tragedy. A tragedy that perhaps kept repeating for every human society and individual in existence for nearly all of history.
This also suggests that different strategies are appropriate for dealing with future moral change. I think we should be very cautious, since I'm sure we don't understand the process. Modern Western civilization's narrative isn't "over time, values became more and more like our own", but "over time, morality got better and better, and this gives our society meaning!". It's the difference between seeing "God guiding evolution" and confronting the full horror of Azathoth.
Do you think any human society ever experienced moral progress?
Hard to say; history is blurry. But we do know the past 300 years well enough that I'm OK with this level of certainty.
I'm far from comfortable saying that there was no moral progress in, say, some medieval European societies. Perhaps not from our perspective, but from the perspective of a sort of CEV of 700 AD values looking at 1100 AD, who knows? I don't know enough to have a reasonable estimate.
There was also useful progress in philosophy before the "Enlightenment" that sometimes captured previous values and preferences and fixed them up. But in nearly any society for which that is true, there was also a lot of harmful philosophy that mutated values in response to various pressures.
If you can’t produce evidence that moral progress ever happened and believe that it definitely hasn’t happened in the recent past, why do you think that moral progress is a coherent concept?
I didn't say I had great confidence in moral progress being a coherent concept. But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead us to discover that some values are incoherent or unreachable, and thus to stop pursuing them.
Feedback at any stage is welcome. Expect frequent edits.
Note: I’ve had very good experiences with such public drafts so far and I recommend them to others.
The trend of moral progress has been one of less acceptance of violence, [...] and less standing by while terrible things happen to others (or at least looking indignant at past instances of this).
I find this juxtaposition unintentionally hilarious. The reason modern society does so much looking indignant at past instances of terrible things happening to others, rather than stopping them while they are happening, is that the only way to stop them is to use violence oneself, which modern society is especially uncomfortable with.
In general this is the problem with attempting to blindly extrapolate present trends past the point where they come into conflict with other present trends.
I know it's a first draft, but it's "Better Angels of Our Nature", much as I love the idea of being able to geometrize moral stature.
In The Abolition of Man, C.S. Lewis argues that utopian dreams mean hoping that a small proportion of the human race will tyrannize over the whole future.
CEV is problematic if part of my idea of knowing more includes the idea of learning from experience. I don’t have unlimited trust in extrapolation.
I don’t know what you mean by violence having some good traits. I can imagine an improved society which permits low-level interpersonal violence with a strong norm that equivalent retaliation should be possible. I don’t think there’s anything gained by big wars, but I could be wrong.
“The wrong side of history” is a way of cheating in an argument. We don’t know the future, and “the wrong side of history” just implies a belief that your side will continue to win. I’m willing to bet that “the wrong side of history” is used by people who aren’t comfortable with making moral pronouncements.
I know it's a first draft, but it's "Better Angels of Our Nature", much as I love the idea of being able to geometrize moral stature.
More a text dump than anything else. Thank you for pointing out the typo, though.
I don’t know what you mean by violence having some good traits.
Violence can be fun. I'd argue this is particularly true of "safe violence" that doesn't result in death or permanent injury. Otherwise we wouldn't include it so much in every aspect of entertainment, particularly interactive entertainment. We also have people who enjoy violence in their sex lives.
I can imagine an improved society which permits low-level interpersonal violence with a strong norm that equivalent retaliation should be possible. I don’t think there’s anything gained by big wars, but I could be wrong.
Yes, this is what I was going for.
I suspect the two of you are using “violence” with slightly different meanings.
But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead us to discover that some values are incoherent or unreachable, and thus to stop pursuing them.
Some people might reasonably, and coherently, value valuing incoherent or unreachable values (in, so to say, compartmentalized good faith: you might know that an algorithm is incoherent, prone to Dutch-booking, etc., but it still feels just fine from the inside), just as some people think that belief in belief might have worth of its own, are consciously hypocritical, etc. Therefore, I'm against such one-level optimizing-away of already-held values; if you see that some specific value is a total mess, you might instead just compartmentalize a little.
(I believe I’ve already mentioned the above to you at some point.)
BTW, a classic example of people valuing an unreachable value: “Love thy enemies”. (Once I had an awesome experience meditating on it.)
I would benefit from seeing a clear distinction made in these discussions between two different questions about moral progress:
1) Have moral intentions improved? Does a typical person educated in an advanced society have better moral intentions (never mind outcomes) than a typical person educated in a backward society?
2) Have moral outcomes improved? Are there, in aggregate, more moral events and fewer immoral events (never mind intentions) now than previously?
Of course there is no consensus on what “moral” means in either of these questions. I think Pinkerian “amount of violence” is a pretty good proxy for 2), but not for 1).
Your counterargument to Pinker is pretty central to this thing, but as it stands it seems to boil down to a not-yet-very-convincing "I don't care for vegetarianism; violence is occasionally entertaining." This part should be the one that makes the reader go, hm, maybe there's a point there, but it's currently doing nothing to make me stop classifying the factory-farming food industry and a preoccupation with violence as problems rather than things to cherish.
Moving on to
This leads me to suspect that Homer's FAI would be different from my own FAI, which would in turn be different from an FAI built on the values of 2400 AD. If FAI2400 gets to play with the universe forever instead of FAI2012, I'd be rather pissed.
this is also confusing. You're basically restating the exact problem CEV is meant for, without mentioning that CEV is meant for this problem. It also really only makes sense if you anthropomorphize FAI into basically an equivalent of the cultural norms of the era. There are way too many unknown unknowns in how the basic cultural backdrop would come out when operated on by an AI, as compared to when operated on by collective human minds, for equating it to the outcomes of a culture run by humans to make much sense. I'm basically assuming that the (hopefully better) understanding of just how intelligence works in 2400 would dominate over whatever the human cultural norms are like in determining how FAI2400, as opposed to FAI2012, would come out.
If I wanted to attack the thesis that we're experiencing moral progress due to cultural evolution, I'd look at how we currently have unprecedented energy resources at our disposal and can afford a great deal more social signaling of every sort than at pretty much any other point in history, and at how for the past 300 years we've been on a rising gradient towards the current level of resource use.
From a historical perspective, I'd be interested in whether we can quantify any differences in moral progress, separate from material progress, among the geographically and culturally separate large historical civilizations, and in what we can make of the collapse of the Roman Empire into the Early Middle Ages.
The article might also try to say something about what it could mean for a society to be moral, independent of how technologically advanced and resource-rich the society is.
You bastard.
EDIT: That’s a joke, in case it’s not clear.
This is plain false, because my parents are married. However, this isn't usually how we do moral arguments around here; are you new to the site?
That was a joke.
Went right over my head, sorry. :)
You probably weren't the only one; it's famously hard to convey tone using text.
http://en.wikipedia.org/wiki/Poe%27s_law