The trouble is that there are multiple meanings of “moral values” here. There is the human instantiation, and the ideal decision agent instantiation. The ideal decision agent instantiation is used in 5. and a bit in 4. The human instantiation is used elsewhere.
Though usually these are pretty close and the approximation is useful, it can also run into trouble when you’re talking specifically about things humans do that ideal decision agents don’t do, and this is one of those things.
Specifically, 5. doesn’t necessarily work for human values, since we’re so inconsistent. People can go into isolation and just think and come out with different human values. How weird is that?!
I think you are right to call attention to the issue of drift.
Drift is bad in a simple value—at least in agents that consider temporal consistency to be a component of rationality. But drift can be acceptable in those ‘values’ which are valued precisely because they are conventions.
It is not necessarily bad for a teen-age subculture if their aesthetic values (on makeup, piercing, and hair) drift, as long as they don't drift so fast that nobody knows what to aim for.
Those are instrumental values. Nobody cares very much if those change, because they were just a means to an end in the first place.
My position here is roughly that all ‘moral’ values are instrumental in this sense. They are ways of coordinating so that people don’t step on each other’s toes.
Not sure I completely believe that, but it is the theory I am trying on at the moment. :)
Right—but there are surely also ultimate values.
Those are the ones that are expected to be resistant to change.
It can’t be instrumental values all the way down.
Correct. My current claim is that almost all of our moral values are instrumental, and thus subject to change as society evolves. And I find the source of our moral values in an egoism which is made more effective by reciprocity and social convention.
I think these guys have a point. So, from my perspective, Egoism is badly named.
I mostly agree, but the argument still works if you throw out 5 altogether.
5 is the only is-ought link in the chain. Seems pretty integral to me.
I thought 4 and 5 were parallel, with 4 a bit stronger than 5.
But that’s only an “is” statement. To think “and that’s guaranteed to be bad” at the end of 4 is to assume 5.