Some changes in morality come about because people notice that their previous ideas contained incorrect probability assessments. These changes can be considered moral progress.
Example: people find a logical inconsistency in their moral thinking, and correct for it.
Example: people notice that they have been assuming it necessary to be Homo sapiens or to be of a specific gender or color in order to have conscious experience, and that they don’t actually have any basis for such an assumption.
As long as our knowledge about the universe (including our own thought processes and the assumptions and mistakes we are making without realising them) continues to increase at a rapid pace, it is likely that every now and then we will learn something that causes a correction in our moral thinking. Alternatively, at some point we may still be learning rapidly, yet it may have been a really long time since we last ran across anything that changed our ideas about morality (such a time hasn't yet come during history).
When/if we have learned all that we possibly can (this includes thinking about things long enough and carefully enough to get all the insights available to us), there can be no more moral progress. If in such circumstances we find that our ideas about morality are identical to those of our peers-in-knowledge (including knowledge about each other's past life experiences), and that these ideas don't change over time, this would prove that the change in our ideas about morality converged (the question of how much of the change was a random walk could be further studied by running ancestor simulations).
On the other hand, we might find that we still can’t agree with each other about morality, not even when we are essentially omniscient. This would prove that much of the change in moral ideas is a random walk, and that possibly only a small fraction of the changes can be considered progress.
And even now, it seems to me logically possible to be essentially omniscient and yet have really weird utility functions. But perhaps very few of us humans will ever want to change ourselves into beings with very weird utility functions, and most of us will indeed converge to some specific ideas about morality.
(I guess I should confess that my thinking expressed here has been heavily influenced by Eliezer’s previous writings.)
Paul Gowder,
Yes, there are possible minds that do math/logic/deduction differently. Most of these logically possible minds would perform even worse than humans in these respects, and would die out.
In this universe, if one wishes to reach one's goals, one has to choose to (try to) do math/logic/deduction in the correct way: the way that delivers results. What works is determined by the laws of physics and logic, which in our universe seem quite coherent and understandable (to a degree, at least).
There’s no reason to be skeptical about whether I actually have some goals/preferences. And since I assume that I do, I need to conform to the correct way of doing math/logic/deduction, which is determined by what appears to be a rather coherent physical universe.