First, I’m going to call them ‘N’ and ‘J’, because I just don’t like the idea of this comment being taken out of context and appearing to refer to the real things.
Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it’s so big that you run into a number of practical problems first. I’m going to run through as many places where this falls down in practice as I can, even if others have mentioned some.
The assumption that, if you leave J fixed and increase N, the level of annoyance per person in N stays constant. Exactly how annoyed can you be by someone you’ve never even met? Once N becomes large enough, you haven’t met one, your friends haven’t met one, no one you know has met one, and how do you really know whether they actually exist or not? As the size of N increases, in practice the average cost imposed by J decreases, and the total could well hit a bound (see the toy sketch at the end of this comment). You could probably construct things in a way where that wouldn’t happen, but it’s at least not a straightforward matter of just blindly increasing N.
It’s a false dichotomy. Even given the assumptions you state, there are all manner of other solutions to the problem besides extermination. The existence and likely superiority of these solutions is part of our dislike of the proposal.
The assumption that they’re unable to change their opinion is unrealistic.
The assumption that they hate one particular group but don’t then just go on to hate another group when the first one is gone is unrealistic.
The whole analogy is horribly misleading because of all the associations that it brings in. Pretty much all of the assumptions required to make the theoretical situation you’re constructing actually work do not hold for the example you give.
With this much disparity between the theoretical situation and reality, it’s no surprise there’s an emotional conflict.
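To put a number on the first point above (every figure and functional form here is my own illustrative assumption, not anything from the post), here is a minimal sketch of how the total annoyance behaves if per-capita annoyance dilutes as N grows:

```python
# Toy sketch of the "annoyance dilutes with N" point above.
# All numbers and functional forms are illustrative assumptions.

def total_constant(n, per_capita=1.0):
    # Assumes every member of N stays equally annoyed no matter how big N is.
    return n * per_capita

def total_diluted(n, contact_pool=1_000_000, per_capita=1.0):
    # Assumes only a fixed pool of people ever actually encounter a J;
    # everyone else's annoyance is negligible, so the total is bounded.
    return min(n, contact_pool) * per_capita

for n in (10**6, 10**9, 10**12):
    print(f"N = {n:>17,}: constant model {total_constant(n):.2e}, "
          f"diluted model {total_diluted(n):.2e}")
```

Under the constant-annoyance assumption the total grows without bound, so some N eventually outweighs any finite harm to J; under the diluted assumption it stops growing at the size of the contact pool, and no amount of blindly increasing N changes the comparison.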
Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it’s so big that you run into a number of practical problems first
How do you actually define the correct proportion, and measure the relevant parameters?
The funny thing is that the point of my post was the long explanation of practical problems, yet both replies have asked about the “in theory yes” part. The point of those three words was to point out that the statements that followed were made despite my own position on the torture/dust specks issue.
As far as your questions go, I (along with, I expect, the rest of the population of planet Earth) have close to absolutely no idea. Logically deriving the theoretical existence of something does not automatically imbue you with the skills to calculate its precise location.
My only opinion is that the number is significantly more than the “billions of N and handful of J” mentioned in the post, indeed more than will ever occur in practice, and substantially less than 3^^^^^3.
How do you determine your likelihood that the number is significantly more than billions vs. a handful—say, today’s population of Earth against one person? If you have “close to absolutely no idea” of the precise value, there must be something you do know to make you think it’s more than a billion to one and less than 3^^^^^3 to one.
This is a leading question: your position (that you don’t know what the value is, but you believe there is a value) is dangerously close to moral realism...
So, I went and checked the definition of “moral realism” to understand why the term “dangerously” would be applied to the idea of being close to supporting it, and failed to find enlightenment. It seems to just mean that there’s a correct answer to moral questions, and I can’t understand why you would be here arguing about a moral question in the first place if you thought there was no answer. The sequence post The Meaning of Right seems to say “capable of being true” is a desirable and actual property of metaethics. So I’m no closer to understanding where you’re going with this than before.
As to how I determined that opinion, I imagined the overall negative effects of being exterminated or sent to a concentration camp, imagined the fleeting sense of happiness in knowing someone I hate is suffering pain, and then did the moral equivalent of estimating how many grains of rice one could pile up on a football field (i.e. made a guess). This is just my current best algorithm though, I make no claims of it being the ultimate moral test process.
I hope you can understand that I don’t claim to have no idea about morality in general, just about the exact number of grains of rice on a football field. Especially since I don’t know the size of the grains of rice or the code of football either.
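For what it’s worth, here is the literal, non-moral version of that guess, mostly to show how much the answer swings with exactly the unknowns I mentioned; every figure below is a rough assumption I’m plugging in myself:

```python
# Rough Fermi estimate of grains of rice piled on a football field.
# Every number below is an assumption for illustration only.

fields_m2 = {"association football": 105 * 68, "American football": 110 * 49}
grain_volume_m3 = {"small grain": 2e-8, "large grain": 5e-8}  # roughly 20-50 mm^3
pile_height_m = 1.0        # assume the pile is about a metre deep
packing_fraction = 0.6     # loose random packing of grains

for field, area in fields_m2.items():
    for grain, vol in grain_volume_m3.items():
        grains = area * pile_height_m * packing_fraction / vol
        print(f"{field}, {grain}: ~{grains:.1e} grains")
```

Even with the question pinned down that far, the estimates differ by a factor of a few, and relaxing the pile height or the choice of field moves them by orders of magnitude, which is roughly the situation I’m in with the moral version.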
Moral realism claims that:

[True ethical propositions] are made true by objective features of the world, independent of subjective opinion.
Moral realists have spilled oceans of ink justifying that claim. One common argument invents new meanings for the word “true” (“it’s not true the way physical facts, inductive physical laws, or mathematical theorems are true, but it’s still true! How do you know there aren’t more kinds of truth-ness in the world?”). They commit, in my experience, a multitude of sins—of epistemology, rationality, and discourse.
I asked myself: why do some people even talk about moral realism? What brings this idea to their minds in the first place? As far as I can see, this is due to introspection (the way their moral intuitions feel to them), rather than inspection of the external world (in which the objective morals are alleged to exist). Materialistically, this approach is suspect. An alien philosopher with different, or no, moral intuitions would not come up with the idea of an objective ethics no matter how much they investigated physics or logic. (This is, of course, not conclusive evidence on its own that moral realism is wrong. The conclusive evidence is that there is no good argument for it. This merely explains why people spend time talking about it.)
Apart from being wrong, I called moral realism dangerous because—in my personal experience—it is correlated with motivated, irrational arguments. And also because it is associated with multiple ways of using words contrary to their normal meaning, sometimes without making this clear to all participants in a conversation.
As for Eliezer, his metaethics certainly doesn’t support moral realism (under the above definition). A major point of that sequence is exactly that there is no purely objective ethics that is independent of the ethical actor. In his words, there is no universal argument that would convince “even a ghost of perfect emptiness”.
However, he apparently wishes to reclaim the word “right” or “true” and be able to say that his ethics are “right”. So he presents an argument that these words, as already used, naturally apply to his ethics, even though they are not better than a paperclipper’s ethics in an “objective” sense. The argument is not wrong on its own terms, but I think the goal is wrong: being able to say our ethics are “right” or “true” or “correct” only serves to confuse the debate. (On this point many disagree with me.)
I write all this to make sure there is no misunderstanding over the terms used—as there had been in some previous discussions I took part in.
I can’t understand why you would be here arguing about a moral question in the first place if you thought there was no answer.
Certainly there are answers to moral questions. However, they are the answers we give. An alien might give different answers. We don’t care morally that it would, because these are our morals, even if others disagree.
Debate about moral questions relies on the facts that 1) humans share many (most?) moral intuitions and conclusions, and some moral heuristics appear almost universal regardless of culture; and 2) within that framework, humans can sometimes convince one another to change their moral positions, especially when the new moral stand is that of a whole movement or society.
Those are not facts about some objective, independently existing morals. They are facts about human behavior.
As to how I determined that opinion, I imagined the overall negative effects of being exterminated or sent to a concentration camp, imagined the fleeting sense of happiness in knowing someone I hate is suffering pain, and then did the moral equivalent of estimating how many grains of rice one could pile up on a football field (i.e. made a guess). This is just my current best algorithm though, I make no claims of it being the ultimate moral test process.
You start the way we all do—by relying on personal moral intuition. But then you say there exists, or may exist, an “ultimate moral test process”. Is that supposed to be something independent of yourself? Or does it just represent the way your moral intuitions may/will evolve in the future?
Well, this seems to be a bigger debate than I thought I was getting into. It’s tangential to any point I was actually trying to make, but it’s interesting enough that I’ll bite.
I’ll try and give you a description of my point of view so that you can target it directly, as nothing you’ve given me so far has really put much of a dent in it. So far I just feel like I’m suffering from guilt by association—there are people out there saying “morality is defined as God’s will”, and as soon as I suggest it’s anything other than some correlated preferences I fall into their camp.
Consider first the moral views that you have. Now imagine you had more information and had heard some good arguments. In general your moral views would “improve” (give or take the chance of specifically misrepresentative information or persuasive false arguments, which in the long run should eventually be cancelled out by more information and arguments). Imagine also that you’re smarter; again, in general your moral views should improve. You should prefer the moral views that a smarter, better-informed version of yourself would have to your current views.
Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and also your intelligence approaches the perfect rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality. This “existence” is not the same as being somehow “woven into the fabric of the universe”. Aliens could not discover it by studying physics. It “exists”, but only in the sense that Aleph 1 exists or “the largest number ever to be uniquely described by a non-potentially-self-referential statement” exists. If I don’t like what it says, that’s by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views (I’m referring here to one of Eliezer’s criticisms of moral realism).
So, if I bravely assume you accept that this limit exists, I can imagine you might claim that it’s still subjective, in that it’s the limit of an individual person’s views as their information and intelligence approach perfection. However, I also think that the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion. As such, the only thing left to break the symmetry between two different perfectly intelligent and completely informed beings is the simple fact of them being different people. This is where I bring in the difference between morality and preference. I basically define morality as being about what’s best for everyone in general, as opposed to preference which is what’s best for yourself. Which person in the universe happens to be you should simply not be an input to morality. So, this limit is the same rational process, the same information, and not a function of which person you are, therefore it must be the same for everyone.
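Stated a little more formally (the notation here is mine, introduced only to pin the claim down): write $M(p, I, r)$ for the moral views person $p$ would hold given information $I$ and reasoning ability $r$. The claim is that

$$M^{*} \;=\; \lim_{I \to I_{\text{full}},\; r \to r_{\text{Bayes}}} M(p, I, r)$$

exists, and that because which person you happen to be is not a legitimate input to morality, the value of this limit does not depend on $p$.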
Now at least you have a concrete argument to shoot at rather than some statements suggesting I fall into a particular bucket.
I’ll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it’s really big.
Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and also your intelligence approaches the perfect rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality.
Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman’s limit.
Aliens could not discover it by studying physics. It “exists”, but only in the sense that Aleph 1 exists
So aliens could discover it by studying mathematics, like a logical truth? Would they have any reason to treat it as a moral imperative? How does a logical fact or mathematical theorem become a moral imperative?
If I don’t like what it says, that’s by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views
You gave that definition yourself. Then you assume without proof that those ideal morals exist and have the properties you describe. Then you claim, again without proof or even argument (beyond your definition), that they really are the best or idealized morals, for all humans at least, and that they describe universal moral obligations.
You can’t just give an arbitrary definition and transform it into a moral claim without any actual argument. How is that different from me saying: I define X-Morals as “the morals achieved by all sufficiently well-informed and smart humans, which require they must greet each person they meet by hugging; if you don’t like this requirement, it’s by definition because you’re misinformed or stupid”?
I also think that the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion.
The same conclusion about facts they have information about, like physical facts or logical theorems. But nobody has “information about morals”. Morals are just a kind of preference. You can only have information about some particular person’s morals, not about morals in themselves. So perfect Bayesians will agree about what my morals are and about what your morals are, but that doesn’t mean your morals and my morals are the same. Your argument is circular.
This is where I bring in the difference between morality and preference. I basically define morality as being about what’s best for everyone in general, as opposed to preference which is what’s best for yourself.
Well, first of all, that’s not how everyone else uses the word morals. Normally we would say that your morals are to do what’s best for everyone, while my morals are something else. Calling your personal morals “simply morals” is equivalent to saying that my (different) morals shouldn’t be called by the name morals, or even “Daniel’s morals”, which is simply wrong.
As for your definition of (your) morals: you describe, roughly, utilitarianism. But people argue forever over brands of utilitarianism: average utilitarianism vs. total utilitarianism, different handling of utility monsters, different handling of “zero utility”, different necessarily arbitrary weighing of whose preferences are considered (do we satisfy paperclippers?), and so on. Experimentally, people are uncomfortable with any single concrete version (they have “repugnant conclusions”). And even if you have a version that you personally are satisfied with, that is not yet an argument for others to accept it in place of other versions (and of non-utilitarian approaches).
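To make just the first of those disagreements concrete (the symbols are mine, purely for illustration): for a population of $n$ people with individual utilities $u_1, \dots, u_n$,

$$U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad U_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i.$$

Adding one more person with a small positive utility always raises $U_{\text{total}}$, but lowers $U_{\text{avg}}$ whenever that utility is below the current average; the two brands already rank possible worlds differently before any of the other questions are even reached.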
We obviously have a different view on the subjectivity of morals, no doubt an argument that’s been had many times before. The sequences claim to have resolved it or something, but in such a way that we both still seem to see our views as consistent with them.
To me, subjective morals like you talk about clearly exist, but I don’t see them as interesting in their own right. They’re just preferences people have about other people’s business. Interesting for the reasons any preference is interesting but no different.
The fundamental requirement for objective morals is simply that one (potential future) state of the world can be objectively better or worse than another. What constitutes “better” and “worse” is an important and difficult question, of course, but still an objective one. I would call the negation, the idea that every possible state of the world is just as good as any other, moral nihilism.
I accept that it’s used for the subjective type as well, but personally I reserve the word “moral” for the objective type: the actual pursuit of a better state of the world irrespective of our own personal preferences. I see objectivity as what separates morals from preferences in the first place—the core of taking a moral action is that your purpose is the good of others, or more generally the world around you, rather than yourself. I don’t agree that people having moral debates are simply comparing their subjective views (which sounds to me like “Gosh, you like fish? I like fish too!”); they’re arguing because they think there is actually an objective answer to which of them is right and they want to find out who it is (well, actually usually they just want to show everyone that it’s them, but you know what I mean).
This whole argument is actually off topic though. I think the point where things went wrong is where I answered the wrong question (though in my defence it was the one you asked). You asked how I determine what the number N is, but I never really even claimed to be able to do that in the first place. What I think you really wanted to know is how I define it. So I’ll give you that. This isn’t quite the perfect definition but it’s a start.
Imagine you’re outside space and time, and can see two worlds: one in which J is left alone, the other in which they’re eradicated. Now, imagine you’re going to choose one of these worlds in which to live the life of a then randomly-chosen person. Once you make the decision, your current preferences, personality, and so on will cease to exist and you’ll just become that new random person. So, the question then becomes “Which world would you choose?”. Or more to the point, “For what value of N would you decide it’s worth the risk of being eradicated as a J for the much higher chance of being a slightly happier N?”.
The one that’s “better” is the one that you would choose. Actually, more specifically it’s the one that’s the correct choice to make. I’d argue this correctness is objective, since the consequences of your choice are completely independent of anything about you. Note that although the connection to my view on morality is probably pretty clear, this definition doesn’t use the word “moral” anywhere. The main post posits an objective question of which is better, and this is simply my attempt to give a reasonable definition of what they’re asking.
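For what it’s worth, the arithmetic behind that question can be written out, using symbols I’m introducing here rather than anything from the post. Suppose a randomly chosen person has utility $u_N$ as an N and $u_J$ as a J in the status-quo world, that eradication adds a small $\varepsilon$ to each N’s utility, and that being eradicated has utility $u_E$. Then the eradication world is the better gamble exactly when

$$\frac{N(u_N + \varepsilon) + J\,u_E}{N+J} \;>\; \frac{N\,u_N + J\,u_J}{N+J} \quad\Longleftrightarrow\quad \frac{N}{J} \;>\; \frac{u_J - u_E}{\varepsilon},$$

so on this reading the required ratio of N to J is set entirely by how large the loss $u_J - u_E$ is relative to the per-person gain $\varepsilon$, which is why a “slightly happier” N pushes the threshold so high.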
Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it’s so big that you run into a number of practical problems first.
Real Ns would disagree.
They did realize that killing Js wasn’t exactly a nice thing to do. At first they considered relocating Js to some remote land (Madagascar, etc.). When it became apparent that relocating millions while fighting a world war wasn’t feasible and they resolved to kill them instead, they had to invent death camps rather than just shooting them, because even the SS had problems doing that.
Nevertheless, they had to free the Lebensraum to build the Empire that would Last for a Thousand Years, and if these Js were in the way, well, too bad for them.

Ends before the means: utilitarianism at work.
I don’t see why utilitarianism should be held accountable for the actions of people who didn’t even particularly subscribe to it.

Also, why are you using N and J to talk about actual Nazis and Jews? That partly defeats the purpose of my making the distinction.
They may not have framed the issue explicitly in terms of maximization of an aggregate utility function, but their behavior seems consistent with consequentialist moral reasoning.
Reversed stupidity is not intelligence. That utilitarianism is dangerous in the hands of someone with a poor value function is old news. The reasons why utilitarianism may be correct or not exist in an entirely unrelated argument space.
click the “Show help” button below the comment box

Ugh, so obvious, except I only looked for the help in between making edits, looking for a global thing rather than the (more useful, most of the time) local thing.

Thanks!
Why is that relevant? Real Ns weren’t good rationalists after all. If the existence of Js really made them suffer (which it most probably didn’t, under any reasonable definition of “suffer”) but they realised that killing Js has negative utility, there were still plenty of superior solutions, e.g.: (1) relocating the Js after the war (they really didn’t stand in the way), (2) giving all or most Js a new identity (you don’t recognise a J without digging into birth certificates or something; destroying these records and creating strong incentives for the Js to be silent about their origin would work fine), (3) simply stopping the anti-J propaganda, which was the leading cause of the hatred while often being pursued for reasons unrelated to Js, mostly to foster citizens’ loyalty to the party by creating an image of an evil enemy.
Of course Ns could have beliefs, and probably a lot of them had beliefs, which somehow excluded these solutions from consideration and therefore justified what they actually did on utilitarian grounds. (Although probably only a minority of Ns were utilitarians). But the original post wasn’t pointing out that utilitarianism could fail horribly when combined with false beliefs and biases. It was rather about the repugnant consequences of scope sensitivity and unbounded utility, even when no false beliefs are involved.
What definition is that?

That clause was meant to exclude the possibility of claiming suffering whenever one’s preferences aren’t satisfied. Since I wrote ‘any reasonable’, I didn’t have one specific definition in mind.