So we need another word to filter out those kinds of somewhat-arbitrary proposed meta-ethical systems. “Objective” probably is not the best word for the job, but it is the only one I can think of right now.
This is where I stopped reading.
I suggest that you actually read the SEP entry on meta-ethics instead of just linking there—if you did read it, feel free to correct my guess. Metaethics does not mean what you said it did (metaethics is a theory of what morality is, not a way of comparing moralities), moral realism does not mean what you said it did (your belief that morality is a real thing out there constitutes moral realism), naturalistic metaethics does not mean what you said it did, CEV is totally not about convergence in all possible minds, etcetera. I also have to ask whether you read the Metaethics Sequence, but I mostly regard that sequence as having failed so I won’t be surprised if the answer is yes.
Metaethics Sequence, but I mostly regard that sequence as having failed
Has anyone reached what you regard as a satisfactory level of understanding of your ideas as a result of reading the sequence? That is, does its failure refer to a lower-than-wanted probability that a person reading the sequence understands your ideas, or to an almost complete failure to communicate your ideas to anyone?
Well, it looks to me like SIAI core people got it, but there’s trouble being sure about that sort of thing.
By “failed” do you mean the presentation didn’t get your ideas across, or do you think the ideas (or some of them) are wrong or incomplete?
Is there a do-over in the works? Is it covered in the upcoming book? What’s the next-best source of learning these ideas, if any?
Without contradicting you in any way, and with an acknowledgement that you could well disapprove of the way I think about morality too, I’ll add that comprehension seems to have extended to the unaffiliated population. However, both the rate and the degree of comprehension are definitely much lower than for your core rationality material, surprisingly so. I have since formed the impression that the difficulties in thinking about morality extend far beyond just how your own posts are received.
As someone who apparently did not ‘get it’, I would suggest that there was a problem with clarity, and that the root cause of the lack of clarity was something that might be called ‘moral cognitive distance’. It frequently seemed that you were appealing to my moral intuitions, expecting them to be the same as yours. Pretty often they weren’t.
As far as I can tell, I got it. My evidence that I have it right is that I agree with you about it, and that nothing you’ve said based on your metaethics since I understood it has surprised me.
I suggest that you actually read the SEP entry on meta-ethics instead of just linking there—if you did read it, feel free to correct my guess.
Good guess. If I have read it, it wasn’t within the last year. I will follow your advice and do so now.
Metaethics does not mean what you said it did (metaethics is a theory of what morality is, not a way of comparing moralities)
Poor choice of wording on my part. I meant to say that comparing moralities is one of the things that meta-ethics covers; that if you are engaged in comparing moralities, you are doing meta-ethics. Is this wrong?
moral realism does not mean what you said it did (your belief that morality is a real thing out there constitutes moral realism)
I didn’t understand this bit. Is the part in parentheses meant to exemplify what I said, or is it your correction of what I said? If the latter, then you may have misunderstood what I said. My fault, no doubt.
I also have to ask whether you read the Metaethics Sequence, but I mostly regard that sequence as having failed so I won’t be surprised if the answer is yes.
Actually, I have read most of it, and I agree with your assessment. Where I understood it, I frequently disagreed.
I’m disappointed that my lack of scholarship in ethical philosophy was a barrier to your completing the reading of my posting. I will try to do better next time.
ETA: Until I have a chance to rewrite, I have placed the most muddled parts of my posting in a kind of ‘posted quarantine’ so that readers may skip over them, if they wish.
And I want to thank Eliezer for his critique—I neglected to do so in my initial response.
Poor choice of wording on my part. I meant to say that comparing moralities is one of the things that meta-ethics covers; that if you are engaged in comparing moralities, you are doing meta-ethics. Is this wrong?
I think it is. Comparing moralities is part of morality. Comparing meta-ethical claims such as moral realism, emotivism, error theory, relativism, etc. is meta-ethics, of course, but if you’re comparing object-level moral systems, like any of the various flavours of “utilitarianism” or any religion’s moral teachings or anything else, then you’re doing morality, not meta-ethics. True, you are asking “should” questions about how to answer “should” questions, which is rather meta, but that’s not the kind of meta that “meta-ethics” usually refers to.
(That’s not to say that meta-ethics is irrelevant to comparing moral systems — if you have a coherent meta-ethics, then it’ll probably inform your comparisons — but it’s not essential to the process.)
Poor choice of wording on my part. I meant to say that comparing moralities is one of the things that meta-ethics covers; that if you are engaged in comparing moralities, you are doing meta-ethics. Is this wrong?
I think it is. Comparing moralities is part of morality. …
Hmmm. I think you are right. At the risk of appearing really ridiculous, I now have to admit that I used poor wording in my confession above that I had used poor wording. What I really should have said is that if you are discussing the criteria that AIs might use in comparing moralities, as I did in the OP, then you are doing meta-ethics.
Is this wrong too?
Just as another data point as far as the metaethics sequence:
Seemed to me to make sense, to “click” with me fairly well when I read it. (A couple bits perhaps were slower/tougher for me, like the injunction stuff and moral responsibility, but overall I feel that I grasped the ideas.)
Just to verify (to avoid (double) illusions of transparency), here’s my super hyper summarized understanding of it: Morality is objective, and humans happen (for various reasons) to be the sort of beings that actually care about morality, as opposed to caring about something else (like pebblesorting or paperclipping). Further, we indeed should be moral, where by “should”, I am appealing to, well, that particular standard known as “morality”. And similarly, it is indeed objectively better (that is, more moral) to be moral.
Further, morality includes such values as happiness, consciousness, novelty, self-determination, etc...
(Of course, this skips subtleties like how we’re not fully reflective so it’s difficult for us to explicitly fully state the core underlying rules we use to judge morality, and the fact that those rules include rules for what sort of arguments to accept to update our present understanding, etc...)
Anyways, take that as a data point (plus or minus, depending on how well my understanding, as represented in the summary, reflects the actual intended concepts).