Relevant quote from Morality as Fixed Computation:

But the key notion is the idea that what we name by ‘right’ is a fixed question, or perhaps a fixed framework. We can encounter moral arguments that modify our terminal values, and even encounter moral arguments that modify what we count as a moral argument; nonetheless, it all grows out of a particular starting point. We do not experience ourselves as embodying the question “What will I decide to do?” which would be a Type 2 calculator; anything we decided would thereby become right. We experience ourselves as asking the embodied question: “What will save my friends, and my people, from getting hurt? How can we all have more fun? …” where the “…” is around a thousand other things.
So ‘I should X’ does not mean that I would attempt to X were I fully informed.
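A minimal sketch of the contrast the quote draws, in invented Python terms (none of these names come from the post): a criterion for ‘right’ that stays fixed no matter what the agent decides, versus a ‘Type 2 calculator’ whose only question is what it will decide, so its answer can never be mistaken.

```python
# Toy illustration only; every name here is invented, not from the post.

def fixed_question(action, world_model):
    """'Right' as a fixed criterion over predicted outcomes. The agent's
    world_model can be wrong, so the agent can be mistaken about what is
    right; the question itself never moves."""
    outcome = world_model(action)
    # Stand-ins for "save my friends ... more fun ... ~a thousand other things".
    return outcome["friends_unhurt"] and outcome["fun"] > 0

def type_2_calculator(decide):
    """Embodies only the question "What will I decide to do?": the output
    is whatever gets decided, so it is 'right' by construction and nothing
    could ever count as a moral error."""
    return decide()
```

Under fixed_question, a better world_model can reveal that a past choice was wrong; under type_2_calculator, ‘I should X’ collapses into ‘I will X’, which is exactly the collapse the quoted line rejects.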
Thanks—I hope you’re providing that as evidence for my point.
Sort of. It certainly means he doesn’t define morality as extrapolated volition. (But maybe “equate” meant something looser than that?)
Aghhhh this is so confusing. Now I’m left thinking both you and Wei Dai have furnished quotes supporting my position, User:thomblake has interpreted your quote as supporting his position, and neither User:thomblake nor User:gjm has replied to Wei Dai’s quote, so I don’t know if they’d interpret it as evidence for their position too! I guess I’ll just assume I’m wrong in the meantime.