See Motivational internalism/externalism
(you might get better quality results if you asked specifically ‘is motivational internalism true?’ and provided that link; it’s basically the same as what you asked but less open to interpretation.)
My personal understanding is that motivational internalism is true in proportion to the level of systematization-preference of the agent. That is, for agents who spend a lot of time building and refining their internal meaning structures, motivational internalism is more true (for THEM, moral judgements tend to inherently motivate); in other cases, motivational externalism is true.
I have weak anecdotal evidence of this (and also of a correlation of ‘moral judgements inherently compel me’ with low self-worth—the ‘people who think they are bad work harder at being good’ dynamic.)
TL;DR: My impression is that motivational externalism is true by default (I answered ‘No’ to your poll), and motivational internalism is something that individual agents may acquire as a result of elaborating and maintaining their internal meaning structures.
I would argue that acquiring a degree of motivational internalism is beneficial to humans. But it’s probably unjustifiable to assume either that a) motivational internalism is beneficial to AIs, or b) if it is, an AI will necessarily acquire it (rather than developing an alternative strategy, or nothing at all of the kind).
YES! I think this is exactly right: moral realism is not at odds with the orthogonality thesis; but the conjunction of moral realism with moral internalism is.
And it is this conjunction that many people seem to believe, although I cannot see why, because I can’t even imagine what it would mean for the world to be such that it is true. So I find it obvious that the conjunction isn’t true. It’s not quite so clear which of the conjuncts is false (or perhaps both), though.
This is more of a question about what qualifies as a moral judgment. It’s possible to make moral judgments (under one definition) from the outside about other systems of morality or other people’s utility functions, e.g. “According to Christianity, masturbation is a sin” doesn’t motivate you to stop masturbating unless you firmly believe in Christianity, and “According to Bob’s utility function, he should donate more to charity” needn’t motivate you to donate more to charity. On the other hand, it’s impossible to believe “According to my moral system, I should do X” and not think X is the right thing for you to do.
On the one hand, nothing is necessarily true about an arbitrary mind, because nothing is true about all minds, for the same reason that there are no universally compelling arguments.
On the other hand, this is just another disagreement about what words refer to: someone who says “moral judgments necessarily motivate” is just saying “a judgement that does not motivate, does not fit my definition of moral”. This is not a fact about the world or about morality, it’s a fact about the way that person uses the words “moral judgment”.
If there is indeed wide disagreement on the answer to this question—I write this before voting and haven’t seen the results yet—then that is yet another argument for tabooing the word “morality”.
I think ‘Do moral judgments necessarily motivate?’ is the key question here. [pollid:576]
Is this a poll about whether moral judgements necessarily motivate, or whether that’s the key question?
Would people (especially those who haven’t read the philosophical background) say what they think this question means? I suspect giant misinterpretation.