In particular, one of the reasons I think progress happens is that some things are pinned down by reality.
Indeed. To put it another way, what we call progress is mainly adaptation to changing material circumstances. So, there are uncountably many greatly diverging counterfactual moral endpoints, similarly to how there are uncountably many possible configurations of matter. Of course, some possibilities are more likely than others, in either realm.
I think I buy some of this—some ‘moral progress’ is increasing wealth allowing us to afford more luxuries. The optimal amount of self-expression is higher when it doesn’t cost as much in terms of starvation.
But I think I’m mostly interested in a different sort of progress—the kind where someone’s idea of what ‘the good’ is changes. [In particular, when thinking about the deep future, it’s more relevant to ask population-ethics-style questions of “which populations would we rather exist?” than individual behavior questions like “what behavior is righteous in this case?”.]
There’s a concept that sometimes gets used of ‘technological completion’—that is, you don’t know every logical fact, but you have come across all of the relevant designs. You’re no longer designing better chips or cars or space probes, because you’ve found all of the instances on the design frontier.
So by “moral endpoint” I mostly mean which options should be chosen at technological completion. It would be weird if there were one obvious choice of what to fill the universe with, even if it’s not weird that there’s one best transistor design (or whatever).
I think I buy some of this—some ‘moral progress’ is increasing wealth allowing us to afford more luxuries.
But I think I’m mostly interested in a different sort of progress—the kind where someone’s idea of what ‘the good’ is changes.
But these are largely the same. We are now rich enough to eschew slavery, so we can afford the “luxury” of banning it (and calling it utter evil). It’s plausible that we’ll soon be rich enough that eschewing animal meat will no longer be onerous, et cetera.
There’s a concept that sometimes gets used of ‘technological completion’—that is, you don’t know every logical fact, but you have come across all of the relevant designs.
Relevant to what? As long as our preference/motivation systems are subject to change (both through evolution and eventually through deliberate modification), I don’t see why we’ll ever run out of novel stuff to want. It’s a different story if we get a singleton that decides to implement some sort of lock-in; in that case the talk about completion and endpoints makes sense to me, but hopefully that can be avoided.