By “metaethics,” do you mean something like “a theory of how humans should think about their values”?
I feel like I’ve seen that kind of usage on LW a bunch, but it’s atypical. In philosophy, “metaethics” has a thinner, less ambitious interpretation of answering something like, “What even are values, are they stance-independent, yes/no?”
And yeah, there's often more nuance than that once you dive into what philosophers in the various camps are actually saying, but my point is that it's not that common, and certainly not necessary, that "having confident metaethical views," on the academic philosophy reading of "metaethics," means something like "having strong and detailed opinions on how AI should go about figuring out human values."
(And maybe you'd count this against academia, which would be somewhat fair, to be honest, because parts of "metaethics" in philosophy are even further removed from practicality: they concern the analysis of the language behind moral claims. If we compare this to claims about the Biblical God and miracles, it would be like focusing way too much on whether the people who wrote the Bible thought they were describing real things or just metaphors, without directly trying to answer burning questions like "Does God exist?" or "Did Jesus live and perform miracles?")
Anyway, I’m asking about this because I found the following paragraph hard to understand:
Behind a veil of ignorance, wouldn’t you want everyone to be less confident in their own ideas? Or think “This isn’t likely to be a subjective question like morality/values might be, and what are the chances that I’m right and they’re all wrong? If I’m truly right why can’t I convince most others of this? Is there a reason or evidence that I’m much more rational or philosophically competent than they are?”
My best guess of what you might mean (low confidence) is the following:
You're conceding that morality/values might be (to some degree) subjective, but you're cautioning people against having strong views about "metaethics," which you take to be the question not just of what morality/values even are, but also, a bit more ambitiously, of how to best reason about them and how to (e.g.) have AI help us think about what we'd want for ourselves and others.
Is that roughly correct?
Because if one goes with the "thin" interpretation of metaethics, then "having one's own metaethics" could be as simple as believing some flavor of "morality/values are subjective," and in the part I quoted you don't necessarily sound too strongly opposed to just that stance in itself.
I have also noticed that when you read the word "metaethics" on LessWrong, it can mean anything that is in some way related to morality.
Maybe I should take it upon myself to write a short essay on metaethics, how it differs from normative ethics, and why it may be of importance to AI alignment.