Imagine if humans had never broken into different groups and we all spoke the same language. No French, no English, just “the Language”. People study the Language, debate it, etc.
Then one day intelligent aliens arrive. Philosophers immediately begin debating: do these aliens have the Language? On the one hand, they’re making noises with what appears to be something comparable to a mouth, the noises have an order and structure to them, and they communicate information. But what they do sounds nothing like “the Language”. They refer to objects with different sounds than the Language requires, and sometimes make sounds that describe what an object is like after the sound that refers to the object.
“Morality” has a similar type-token ambiguity. It can refer to our values or to values in general. Saying Clippy knows what is moral but doesn’t care is true under the token interpretation, but not the type one. The word “morality” has meanings and connotations that imply Clippy has a morality, just a different one, in the same way that the aliens have a language, just a different one.
So, I guess the point of EY’s metaethics can be summarized as ‘by “morality” I mean the token, not the type’.
(Which is not a problem IMO, as there are unambiguous words for the type, e.g. “values”—except insofar as people are likely to misunderstand him.)
Especially because the whole point is to optimize for something. You can’t optimize for a type that could have any value.
Isn’t it an optimization to code in the type, and let the AI work out the details necessary to implement the token? We don’t think theorem provers need to be overloaded with all known maths.
Is this some kind of an NLP exercise?
FWIW, I’ve mostly concluded something along those lines.
You wrote
“But when you ask a question and someone provides an answer you don’t like, showing why that answer is wrong can sometimes be more effective than simply asserting that you don’t buy it”
...and I did...
Indeed. And?
If you don’t want someone to put up an argument, don’t ask for it.
I agree completely.
Had I known in advance the quality of argument you would put up, I would not have wanted you to put it up, and would not have asked for one, in full compliance with this maxim.
Lacking prescience, I didn’t know in advance, so I did want an argument, and I did ask for one, which fails to violate this maxim.
You wanted an argument? Sorry, this is “Insults”. Go down the hall and to the left. (Monty Python, to my best recollection)
You want 12A, just along the corridor
I’m afraid I have developed a sudden cognitive deficit that prevents me from understanding anything you are saying. I have also forgotten all the claims I have made, and what this discussion is about.
In short, I’m tapping out.
?
There are immoral and amoral values, so no.