I believe that abstract moral intuitions do have motive power.
For example, I have never been a very good utilitarian because I am selfish, lazy, etc. However, if one year ago you had offered me the option of becoming a very good preference utilitarian, in an abstract context where my reflexes didn’t kick in, I would have accepted it. If you had given me the option to implement an AI which was a preference utilitarian (in some as-yet-never-made-sufficiently-concrete sense which seemed reasonable to me), I would have taken it.
I am also not sure what particular extreme behavior preference utilitarianism endorses when followed to its logical conclusion. I’m not aware of any extreme consequences that I hadn’t already accepted (of course, I was protected from unwanted extreme consequences by the wiggle room of a grossly underdetermined theory).