See Motivational internalism/externalism
(you might get better quality results if you asked specifically ‘is motivational internalism true?’ and provided that link; it’s basically the same as what you asked but less open to interpretation.)
My personal understanding is that motivational internalism is true in proportion to the level of systematization-preference of the agent. That is, for agents who spend a lot of time building and refining their internal meaning structures, motivational internalism is more true (for THEM, moral judgements tend to inherently motivate); in other cases, motivational externalism is true.
I have weak anecdotal evidence of this (and also of correlation of ‘moral judgements inherently compel me’ with low self worth—the ‘people who think they are bad work harder at being good’ dynamic.)
TL;DR: My impression is that motivational externalism is true by default (I answered ‘No’ to your poll), and motivational internalism is something that individual agents may acquire as a result of elaborating and maintaining their internal meaning structures.
I would argue that acquiring a degree of motivational internalism is beneficial to humans, but it’s probably unjustifiable to assume either that a) motivational internalism is beneficial to AIs, or b) if it is, then an AI will necessarily acquire it (rather than developing an alternative strategy, or nothing at all of the kind).