those are perfectly coherent and sound for those who entertain them; still, we should not call them “Clippy’s, the Elves’, or the Pebblesorters’ morality”, because words should be used in the way that maximizes their usefulness in carving reality: since we cannot step outside our programming and conceivably find ourselves motivated by eggnog or primality, we should not use the term “morality” for them, and should instead say “primality” or some other word.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me. And yo mama ain’t no Mama cause she ain’t my Mama!
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
And it’s not like the issue isn’t important, either: obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
Yudkowsky isn’t being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
obviously the permissibility of imposing one’s values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reason that you are differently mothered, not unmothered.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the Babyeaters from eating babies, and it is moral for the Superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact altogether.
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
That seems different to what you were saying before.
This is well explored in “Three Worlds Collide”. Yudkowsky’s vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I’m using your convention). When different worlds collide, it is moral for us to stop the Babyeaters from eating babies, and it is moral for the Superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact altogether.
There’s not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that Babyeater morality motivates Babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
Maybe we should be abandoning the objectivity requirement as impossible. As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”. There is the prospect of coordination through shared moral wants, but there is the prospect of coordination through shared selfish wants as well. Ideas of “the good of society” or “objective ethical truth” are simply flawed concepts.
But I do think Yudkowsky has a good point both of you have been ignoring. His stone tablet analogy, if I remember correctly, sums it up.
“I think Eliezer is correct in showing that the only solution is avoiding contact altogether.”: This assumes that there is such a thing as an objective solution, if only implicitly.
“The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.”: Passenger and cargo ships both have purposes within human morality. Alien moralities are likely to contradict each other.
“There’s not much objectivity in that.”: What if objectivity in the sense you describe is impossible?
“Why is it so important that our morality is the one that motivates us? People keep repeating it as though it’s a great revelation, but it’s equally true that Babyeater morality motivates Babyeaters, so the situation comes out looking symmetrical and therefore relativistic.”: If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
Maybe we should be abandoning the objectivity requirement as impossible.
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results.
The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not.
As I understand it, this is in fact core to Yudkowsky’s theory: an “objective” morality would be the tablet he refers to as something to ignore.
I’m not entirely on Yudkowsky’s side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is “What do I want?”.
If you are serious about the unselfish bit, then surely it boils down to “what do they want” or “what do we want”.
What if objectivity in the sense you describe is impossible?
I don’t accept the Moral Void argument, for the reasons given. Do you have another?
If it isn’t, then it comes back to the amoralist challenge. Why should we even care?
The idea that humans are uniquely motivated by human morality isn’t put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.
The essential problem with the tablet is that it offers conclusions as a fait accompli, with no justification or argument. The point does not generalise against objective morality.