You seem to misunderstand most of my beliefs, so I’ll try to address that first before I go any further to avoid confusion.
But those “objective” facts would only be about the intuitions of individual minds,
No. Just no. No no no no no no no no no no no no no. NO! NO!
The objective fact is that there is a brain made mostly of neurons and synapses and blood and other kinds of juicy squishiness, inside which a certain bundle of those synapses is set in a certain particularly complex (as far as we know) arrangement, and when an input of the form “Kill this child?” is sent to that bundle of synapses, the bundle sends queries to other bundles: “Benefits?” “People who die if child lives?” “Hungry?” “Have we had sex recently?” “Is the child real?” etc.
Then, an output is produced, “KILLING CHILD IS WRONG” or “KILLING CHILD IS OKAY HERE”.
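To make that cartoon concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the sub-queries, the stubbed lookups, the decision rule); it caricatures the input-queries-output flow described above and is in no way a claim about how brains actually implement any of this:

```python
# A toy caricature of the "morality bundle" described above.
# All names, inputs, and rules here are invented for illustration.

def morality_bundle(query: str, context: dict) -> str:
    if "Kill this child" not in query:
        return "NO VERDICT"  # this toy bundle only knows one question

    # Sub-queries dispatched to other (stubbed) bundles.
    benefits = context.get("benefits", 0)
    deaths_if_child_lives = context.get("deaths_if_child_lives", 0)
    is_real = context.get("is_real", True)

    # An invented decision rule; the real one is opaque,
    # even to the person whose brain runs it.
    if not is_real:
        return "NO VERDICT"
    if deaths_if_child_lives > benefits:
        return "KILLING CHILD IS OKAY HERE"
    return "KILLING CHILD IS WRONG"

# The conscious mind sees only the returned string, plus occasional
# glimpses of the intermediate lookups: in other words, intuitions.
print(morality_bundle("Kill this child?", {"benefits": 0}))
# -> KILLING CHILD IS WRONG
```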
Human consciousness, the “you” that is you and that wouldn’t randomly decide to start masturbating in public while sleepwalking (you don’t want to be the guy that happened to, seriously), doesn’t have access to everything the bundle of synapses called “morality” inside the brain actually does. It only sees the output, and sometimes glimpses of some of the queries that the bundle sent to other bundles.
In other words, intuitions.
What I refer to as an “objective fact”, the “objective” morality of that individual, is the entire sum of the process: the entire bundle, plus the conscious mind’s review of each individual process, plus what the conscious mind would want to fix in order to be even more moral by the morals of the same bundle of synapses (i.e. self-reflectivity). The exact “objective morality” of each human is a complicated thing that I’m not even sure I grasp entirely and can describe adequately, but I’m quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate.
Same problem. A thinks it is moral to kill B, B thinks it is not moral to be killed by A. Where is the objective moral fact there? (...)
The “objective moral fact” (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A’s morality system to kill B, and B is correct when B thinks it is moral for B’s system to not be killed by A, then and only then it is moral for A to kill B and it is moral for B to not be killed by A. There are no contradictions, the universe is just fucked up and lets shit like this happen.
(...) Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact.
What? No. First, that’s called ethics, the thing about how individuals should interact. The reason ethics is hard is because each individual has a slightly different morality, but the reason it’s feasible at all is because most humans are fairly similar even in this.
Most humans, when faced with the toy problem of saving ten young lives versus three old ones, will save the ten young. Most humans, when they see a child get horribly mutilated or have their flesh melt off of their bones, will be revolted and feel that this is many kinds of Very Wrong.
Most humans, if they have some small thing they value a little, but know that giving it up temporarily would make another human’s morality much, much better off, while keeping that small thing to themselves would leave the other human feeling horribly wronged, will give up that little bit for the benefit of the other human’s morality.
This seems to indicate that most humans have a component, somewhere in this bundle of synapses, that tries to estimate what the other bundles of synapses in other brains are doing, so as to not upset them too much. This is also part of what helps ethics be feasible at all.
Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans.
I don’t even understand what you’re getting at. I’m not trying to come up with a system of norms that tells everyone what they should do to interact with other humans. How is it too subjective to be useful?
I’ve merely presented my current conclusions, the current highest-probability results of computing together all the evidence available to me. These are guesses and tentative assessments of reality, an attempt at approximating and describing what actually goes on out there in human brains that gives rise to humans talking about morality and not wanting to coat children with burning napalm. (sorry if this strikes political chords, I can’t think of a better example of something public-knowledge that the vast majority of humans who learned about it described as clearly wrong)
As for being “too subjective to do anything useful”… what? If I tell you that two cars have different engines, so you can’t use the exact same mathematical formula for calculating their velocity and traveled distance as they accelerate, is this useless subjective information? Because what I’m saying is that humans have different engines in terms of morality, and while like the car engines they have major similarities in the logical principles involved and how they operate, there are key differences that must be taken into consideration to produce any useful discussion about the velocities and positions of each car.
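To put a toy version of the analogy in code (invented numbers, standard constant-acceleration kinematics): the formula is shared, the engine parameter is not, and knowing which parameter belongs to which car is objective, useful information:

```python
# Same kinematics formula for both cars; only the acceleration
# parameter (the "engine") differs. All numbers are invented.

def distance(accel: float, t: float, v0: float = 0.0) -> float:
    """Distance traveled after t seconds under constant acceleration."""
    return v0 * t + 0.5 * accel * t ** 2

car_a_accel = 3.0  # m/s^2
car_b_accel = 4.5  # m/s^2
t = 10.0           # seconds

print(distance(car_a_accel, t))  # 150.0 m
print(distance(car_b_accel, t))  # 225.0 m

# Shared principles (the formula), different parameters (the engines):
# plugging one car's parameter into the other car's prediction gives
# objectively wrong answers, which is the point of the analogy.
```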
That is hard to interpret. Why should opinions be what is “objectively moral”? You might mean there is nothing more to morality than people’s judgements about what is good or bad, but that is not an objective feature of the universe, it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective.
Apologies for being unclear. Opinions are not what is objectively moral. I was saying that the bundle of synapses I described above is the main part of what is objectively moral (well, the algorithms implemented by the synapses, anyway), and that what comes out of that bundle is also what generates the opinions. They are correlated, but not perfectly so, let alone equivalent/equal.
So more often than not, one’s opinion that it is wrong to suddenly start killing and pillaging everyone in the nearest city is a correct assessment about their own morality. On average, most clear-cut moral judgments will be fairly accurate, because they come out of the same algorithms in different manners.
The latter two sentences of this last quote seem to aptly rephrase exactly what I was trying to say. There are objective algorithms and mechanisms in the bundles of nerves, but even though the conscious mind gets a rough idea of what it thinks they might be doing after seeing a “KILLING CHILD IS WRONG” output a hundred times, it still doesn’t have access to the whole thing, and even if it did, there are things one would want to correct in order to avoid errors due to bias.
I can’t really be more precise or confident about what exactly morality is in a human’s brain, because I haven’t won five Nobel Prizes in breakthrough neurobiology, philosophy, peace, ethics, and psychology. I think that’s about the minimum award that would go to someone who had entirely solved and located exactly everything that makes humans moral and exactly how it works.
“We” individually, or “we” collectively? That is a very important point to skate over.
The ambiguity is appropriate, though unintentional. The first response is “we” individually, but to some extent there are many things that all humans find moral, and many more things that most humans find moral. Again the example of napalm-flavored youngsters.
So each of us has a separate algorithm, but if you were to examine them all individually, you could probably (with enough effort and smarts) come up with an algorithm that finds moral only what all humans find moral, or finds moral whatever at least 60% of humans find moral, or some other filtering or approximation.
To give you an example, “2x − 6” will return a positive number as long as x > 3 (let’s not count zero). Similarly, “3x − 3” will return a positive number as long as x > 1. If positive numbers represent a “This is moral and good” output, then clearly they’re not the same morality. However, “x > 3” will guarantee a space of solutions that both moralities find moral and favorable.
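The same toy example in code, together with the filtering idea from the previous paragraph (the two functions come straight from the text; the 60% threshold is the illustrative figure used above):

```python
# Two toy moralities as threshold functions: each "approves" of an
# option x exactly when its output is positive.

def moral_a(x: float) -> bool:
    return 2 * x - 6 > 0   # approves iff x > 3

def moral_b(x: float) -> bool:
    return 3 * x - 3 > 0   # approves iff x > 1

options = [0, 1, 2, 3, 4, 5, 6]

# Intersection: options that both moralities approve of (x > 3).
both = [x for x in options if moral_a(x) and moral_b(x)]
print(both)  # [4, 5, 6]

# The same filtering generalizes to many moralities, e.g. keeping
# whatever at least 60% of them approve of.
moralities = [moral_a, moral_b]
majority = [x for x in options
            if sum(m(x) for m in moralities) / len(moralities) >= 0.6]
print(majority)  # [4, 5, 6]: with only two moralities, both must agree
```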
The exact “objective morality” of each human is a complicated thing that I’m not even sure I grasp entirely and can describe adequately, but I’m quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate
That’s still not the point. The entire bundle still isn’t Objective Morality, because the entire bundle is still inside one person’s head. Objective morality is what all ideal agents would converge on.
The “objective moral fact” (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A’s morality system to kill B, and B is correct when B thinks it is moral for B’s system to not be killed by A, then and only then it is moral for A to kill B and it is moral for B to not be killed by A. There are no contradictions, the universe is just fucked up and lets shit like this happen.
The way you have expressed this is contradictory. You said “it is moral”, simpliciter, rather than “it is moral-for-A, but immoral-for-B”, although to do that would have made it obvious you are talking about subjective morality. And no, it isn’t the universe’s fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses “moral” despite the fact that they don’t resolve conflicts, or take others’ interests into account. I wouldn’t call them that. I think the contradiction means at least one of the agents’ I-think-this-is-moral beliefs is wrong.
What? No. First, that’s called ethics, the thing about how individuals should interact
I don’t think so.

Ethics: “Moral principles that govern a person’s or group’s behavior.”

“1. (used with a singular or plural verb) a system of moral principles: the ethics of a culture.
2. the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics.
3. moral principles, as of an individual: His ethics forbade betrayal of a confidence.
4. (usually used with a singular verb) that branch of philosophy dealing with values relating to human conduct, with respect to the rightness and wrongness of certain actions and to the goodness and badness of the motives and ends of such actions.”
I don’t even understand what you’re getting at. I’m not trying to come up with a system of norms that tells everyone what they should do to interact with other humans.
Then what are you doing? The observation that facts about brains are relevant to descriptive ethics is rather obvious.
If I tell you that two cars have different engines, so you can’t use the exact same mathematical formula for calculating their velocity and traveled distance as they accelerate, is this useless subjective information?
If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law.
So more often than not, one’s opinion that it is wrong to suddenly start killing and pillaging everyone in the nearest city is a correct assessment about their own morality.
Their own something. I don’t think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms “conscience” and “superego” cover internal regulation of behaviour without prejudice to the philosophical issues.
So each of us has a separate algorithm, but if you were to examine them all individually, you could probably (with enough effort and smarts) come up with an algorithm that finds moral only what all humans find moral, or finds moral whatever at least 60% of humans find moral, or some other filtering or approximation.
Has no bearing on the philosophy, again. All you have there is the intersection of a set of tablets.
That’s still not the point. The entire bundle still isn’t Objective Morality, because the entire bundle is still inside one person’s head. Objective morality is what all ideal agents would converge on.
Okay. That is clearly a word problem, and you are arguing my definition.
The way you have expressed this is contradictory. You said “it is moral”, simpliciter, rather than “it is moral-for-A, but immoral-for-B”, although to do that would have made it obvious you are talking about subjective morality. And no, it isn’t the universe’s fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses “moral” despite the fact that they don’t resolve conflicts, or take others’ interests into account. I wouldn’t call them that. I think the contradiction means at least one of the agents’ I-think-this-is-moral beliefs is wrong.
You assumed I was being deliberately sophistic and creating confusion on purpose, after I explicitly requested twice that things be interpreted the other way around where possible. I thought it was very clear from context that what I meant was that:
IFF It is moral-A that A kills B
&& It is moral-B that B is not killed by A
&& There are no other factors influencing moral-A or moral-B
THEN:
It is moral for A that A kills B and it is likewise moral for B to not be killed by A. Let the fight begin.
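A minimal sketch of why this isn’t a logical contradiction: treat “moral” as a two-place predicate over (agent, action) pairs, so the two judgments are different propositions. The encoding below is invented purely for illustration:

```python
# "Moral" as a two-place predicate: moral(agent, action). The two
# judgments attach to different (agent, action) pairs, so both can
# hold without any single proposition being both true and false.

judgments = {
    ("A", "A kills B"): True,    # moral-A endorses A killing B
    ("B", "A kills B"): False,   # moral-B condemns being killed by A
}

def is_moral(agent: str, action: str) -> bool:
    return judgments[(agent, action)]

assert is_moral("A", "A kills B")
assert not is_moral("B", "A kills B")
# No contradiction anywhere in the logic; just two agents whose
# indexed moralities conflict in practice. Hence: let the fight begin.
```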
If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law.
Please stop this. I’m seeing more and more evidence that you’re deliberately ignoring my arguments and what I’m trying to say, and that you’re just equating everything I say with “This is not a perfect system of normative ethics, therefore it is worthless”.
I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I’m not talking about laws and saying “The law should only punish those that act against their intuitions of morality, oh derp!”—I’m not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue?
Their own something. I don’t think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms “conscience” and “superego” cover internal regulation of behaviour without prejudice to the philosophical issues.
Yes. And in case that wasn’t painfully obvious yet, this “something” of their own is exactly what I mean to say when I use the word “morality”!
I’m not attempting to convince anyone that “morality” “exists”. To engage further on this point, I would need those two words to be tabooed, because I honestly have no idea what you’re getting at or what you even mean by that sentence or the one after it.
Has no bearing on the philosophy, again. All you have there is the intersection of a set of tablets.
Yup. If I agree to use your words, then yes. There’s an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of the tablet after the owner has made the fixes, checked again to see if they want to fix anything, and are happy with the result, is exactly what I am pointing at here. I’ve used the words “objective morality” and “true moral preferences” and “moral algorithms” before, and all of those were pointing exactly at this. Yes, I claim that there’s nothing else here, move along.
If you want to have something more, some Objective Morality (in the sense you seem to be using that term) from somewhere else, humans are going to have to invent it. And either it’s going to be based on an intersection of edited tablets, or a lot of people are going to be really unhappy.
That is clearly a word problem, and you are arguing my definition.
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by “objective moral facts”.
It is moral for A that A kills B and it is likewise moral for B to not be killed by A. Let the fight begin.
What fight? You have added the “for A” and “for B” clauses that were missing last time. Are you holding me to blame for taking you at your word?
Really? You’re going there?
You claimed a distinction in meaning between “morality” and “ethics” that doesn’t exist. Pointing that out is useful for clarity of communication. It was not intended to prove anything at the object level.
I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I’m not talking about laws and saying “The law should only punish those that act against their intuitions of morality, oh derp!”—I’m not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue?
I don’t know how accidental it was, but your “moral for A” and “moral for B” comment does suggest that two people can be in contradiction and yet both be right.
Yes. And in case that wasn’t painfully obvious yet, this “something” of their own is exactly what I mean to say when I use the word “morality”!
I am totally aware of that. But you don’t get to call anything by any word. I was challenging the appropriateness of making substantive claims based on a naming ceremony.
I’m not attempting to convince anyone that “morality” “exists”.
You said there were objective facts about it!
Yup. If I agree to use your words, then yes. There’s an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of the tablet after the owner has made the fixes, checked again to see if they want to fix anything, and are happy with the result, is exactly what I am pointing at here. I’ve used the words “objective morality” and “true moral preferences” and “moral algorithms” before, and all of those were pointing exactly at this. Yes, I claim that there’s nothing else here, move along.
You haven’t explained that different individuals would converge on a single objective reality by refining their intuitions, or how or why they would. And no, EY doesn’t either.
If you want to have something more, some Objective Morality (in the sense you seem to be using that term) from somewhere else, humans are going to have to invent it.
if they haven’t already.
And either it’s going to be based on an intersection of edited tablets, or a lot of people are going to be really unhappy.
So values and intuitions are a necessary ingredient. Any number of others could be as well.
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by “objective moral facts”.
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a “fact” about morality, which we could call a “moral fact”. And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
You claimed a distinction in meaning between “morality” and “ethics” that doesn’t exist.
Dictionary definitions are worthless, especially in specialized domains. Does a distinction between “morality” and “ethics” (or even between “descriptive morality” and “normative morality”, if you’re committed to hopelessly confused and biased naming choices by academic philosophers) cut reality at its joints? I maintain that it does.
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a “fact” about morality, which we could call a “moral fact”. And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true. It’s a second-order fact.
Dictionary definitions are worthless, especially in specialized domains.
I’ve never seen that distinction in the specialised domain in question.
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true.
I don’t think that’s a coincidence. Whether there is some kind of factual (e.g. biological) base for morality is an interesting question, but it’s generally a question for psychology and science, not philosophy. People who try to argue for such a factual basis in a naïve way usually end up talking about something very different than what we actually mean by “morality” in the real world. For an unusually clear example, see Ayn Rand’s moral theory, incidentally also called “Objectivism”.
Just got bashed several times, while presenting the fragility of values idea in Oxford, for using the term “descriptive morality”. I was almost certain Eliezer used the term, hence I was blaming him for my bashing. But it seems he doesn’t, and the above comment is the sole instance of the term I could find. I’m blaming you, then! Not really, though; it seems I’ve invented this term on my own, and I’m not proud of it. So far, I’ve failed to find a corresponding term either in meta-ethics or in the Sequences. In my head, I was using it to mean what would be the step zero for CEV. It could be seen as the object of study of descriptive ethics (a term that does exist), but it seems descriptive ethics takes a pluralistic or relativistic view, while I needed a term to describe the morality shared by all humans.
Just got bashed several times, while presenting the fragility of values idea in Oxford, for using the term “descriptive morality”.
So it’s even worse than I thought? When ethicists do any “descriptive” research, they are studying morality, whether they care to admit it or not. The problem with calling such things “ethics” is not so much that it implies a pluralist/relativist view—if anything, it makes the very opposite mistake: it does not take moralities seriously enough, as they exist in the real world. In common usage, the term “ethics” is only appropriate for very broadly-shared values (of course, whether such values exist after all is an empirical question), or else for the kind of consensus-based interplay of values or dispute resolution that we all do when we engage in ethical (or even moral!) reasoning in the real world.
And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
Sooo, not objective then. Definition debates are stupid, but there is no reason at all to be this loose with language. Seriously, this reads like a deconstructionist critique of a novel from an undergraduate majoring in English. Complete with scare quotes around words that are actually terms of art.
Well, yes. I’m using scare quotes around the terms “objective” and “fact”, precisely to point out that I am using them in a more general way than the term of art is usually defined. Nonetheless, I think this is useful, since it may help dissolve some philosophical questions and perhaps show them to be ill-posed or misleading.
Needless to say, I do not think this is “being loose with language”. And yes, sometimes I adopt a distinctive writing style in order to make a point as clearly as possible.