Do you sincerely believe there is no difference? If not, why not start by introspecting on your own thinking on the subject?
Again, we come to this issue of not having a precise definition of “right” and “wrong”.
You’re dodging the questions that I asked.
I am not dodging them. I am arguing that they are inappropriate to the domain, and that not all definitions have to work that way.
But you already have determined that one of them is accurate, right?
Everything is inaccurate for some value of accurate. The point is you can’t arrive at an accurate definition without a good theory, and you can’t arrive at a good theory without an (inevitably inaccurate) definition.
It’s a problem to assert that you’ve determined which of A and B is accurate, but that there isn’t a way to determine which of A and B is accurate.
Edited to clarify: When I wrote this, the parent post started with the line “You say that like it’s a problem.”
I haven’t asserted that any definition of “Morality” can jump through the hoops set up by NMJ and co. but there is an (averagely for Ordinary Language) inaccurate definition which is widely used.
The question in this thread was not “define Morality” but “explain how you determine which of ‘Killing innocent people is wrong barring extenuating circumstances’ and ‘Killing innocent people is right barring extenuating circumstances’ is morally right.”
(For people with other definitions of morality and / or other criteria for “rightness” besides morality, there may be other methods.)
The question was rather unhelpfully framed in Jublowskian terms of “observable consequences”. I think killing people is wrong because I don’t want to be killed, and I don’t want to Act on a Maxim I Would Not Wish to be Universal Law.
My name is getting all sorts of U’s and W’s these days.
If there was a person who decided they did want to be killed, would killing become “right”?
Does he want everyone to die? Does he want to kill them against their wishes? Are multiple agents going to converge on that opinion?
What are the answers under each of those possible conditions (or, at least, the interesting ones)?
Why do you need me to tell you? Under normal circumstances, the normal “murder is wrong” answer will obtain; that’s the point.
Because I’m trying to have a discussion with you about your beliefs?
Looking at this I find it hard to avoid concluding that you’re not interested in a productive discussion—you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I’m going to have to bow out of talking with you on this topic.
I believe murder is wrong. I believe you can figure that out if you don’t know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
This seems problematic. If that’s the case, then your ethical system exists solely to support the bottom line. That’s just rationalizing, not actual thinking. Moreover, it doesn’t tell you anything helpful when people have conflicting intuitions or when you don’t have any strong intuition, and those are the generally interesting cases.
A system that could support any conclusion would be useless, and a system that couldn’t support the strongest and most common intuitions would be pretty hard to credit. A system that doesn’t suffer from quodlibet isn’t going to support both of a pair of contradictory intuitions, and that is pretty well the only way of resolving such clashes; feelings of rightness and wrongness can’t help.
So, to make sure I understand: you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don’t have intuitions?
I don’t think that you realize how frequently our intuitions clash, not just the intuitions of different people, but even one person’s own intuitions (for most people, at least). Consider, for example, train car problems. Whether or not they would pull the lever or push the fat person, most people feel some intuitive pull toward either solution. And train problems are far from the only example of a moral dilemma that causes that sort of issue. Many mundane, real-life situations, such as abortion, euthanasia, animal testing, the limits of consent, and many other issues cause serious clashes of intuitions.
I want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
And how do you decide which intuitions are “core intuitions”?
There’s a high degree of agreement about them. They seem particularly clear to me.
Can you give some of those? I’d be curious what such a list would look like.
E.g., murder, stealing.
So what makes an intuition a core intuition and how did you determine that your intuitions about murder and stealing are core?
That’s a pretty short list.
In this post: “How do you determine which one is accurate?”
In your response further down the thread: “I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]”
And then my post: “But you already have determined that one of them is accurate, right?”
That question was not one phrased in the way you object to, and yet you still haven’t answered it.
Though, at this point it seems one can infer (from the parent post) that the answer is something like “I reason about which principle is more beneficial to me.”
Any belief you have about the nature of reality, that does not inform your anticipations in any way, is meaningless. It’s like believing in a god which can never be discovered. Good for you, but if the universe will play out exactly the same as if it wasn’t there, why should I care?
Furthermore, why posit the existence of such a thing at all?
On a tangent: I think the subjectivist flavor of that is unfortunate. You’re echoing Eliezer’s Making Beliefs Pay Rent, but the anticipations he’s talking about are “anticipations of sensory experience”. Ultimately, we are subject to natural selection, so maybe a more important rent to pay than anticipation of sensory experiences is not being removed from the gene pool. So we might instead say, “any belief you have about the nature of reality, that does not improve your chances of survival in any way, is meaningless.”
Elsewhere, in his article on Newcomb’s paradox, Eliezer says:
Survival is ultimate victory.
I don’t generally disagree with anything you wrote. Perhaps we miscommunicated.
I think that would depend on how one uses “meaningless”, but I wholeheartedly appreciate the sentiment that a rational agent wins, with the caveat that winning can mean something very different for different agents.
Moral beliefs aren’t beliefs about moral facts out there in reality; they are beliefs about what I should do next. “What should I do?” is an orthogonal question to “what can I expect if I do X?”. Since I can reason morally, I am hardly positing anything without warrant.
You just bundled up the whole issue, shoved it inside the word “should” and acted like it had been resolved.
I have stated several times that the whole issue has not been resolved. All I’m doing at the moment is refuting your over-hasty generalisation:
“morality doesn’t work like empirical prediction, so ditch the whole thing”.
It doesn’t work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.
Can you recognize that from my position it doesn’t work like the empiricism I’m used to because it’s almost entirely nonsensical appeals to nothing, arguing by definitions, and the exercising of the blind muscles of eld philosophy?
I am unpersuaded that there exists a set of correct preferences. You have, as far as I can see, made no effort to persuade me, but rather just repeatedly asserted that there are and asked me questions in terms that you refuse to define. I am not sure what you want from me in this case.
Why should I accept your bald assertions here?
You may be entirely of the opinion that it is all stuff and nonsense: I am only interested in what can be rationally argued.
I don’t think you think it works like empiricism. I think you have tried to make it work like empiricism and then given up. “I have a hammer in my hand, and it won’t work on this ‘screw’ of yours, so you should discard it”.
People can and do reason about what preferences they should have, and such reasoning can be as objective as mathematical reasoning, without the need for a special arena of objects.