I’ll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it’s really big.
Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and your intelligence approaches that of a perfectly rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality.
Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman’s limit.
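To pin down what is actually being claimed, here is one way to write it down; the symbols $M_p$, $I$ and $r$ are mine, purely for illustration, and not anything from the original claim. Let $M_p(I, r)$ stand for the moral views person $p$ would hold given information $I$ and reasoning ability $r$:

```latex
% Illustrative notation only; M_p, I and r are not from the original claim.
\[
  M_p^{\ast} \;=\;
  \lim_{\substack{I \,\to\, \text{perfect information} \\
                  r \,\to\, \text{ideal Bayesian}}}
  M_p(I, r)
\]
```

The contention is that this limit exists for each person $p$; my point above is that nothing in the definition makes $M_p^{\ast}$ equal $M_q^{\ast}$ for two different people $p$ and $q$.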
Aliens could not discover it by studying physics. It “exists”, but only in the sense that Aleph 1 exists.
So aliens could discover it by studying mathematics, like a logical truth? Would they have any reason to treat it as a moral imperative? How does a logical fact or mathematical theorem become a moral imperative?
If I don’t like what it says, that’s by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views.
You gave that definition yourself. Then you assume without proof that those ideal morals exist and have the properties you describe. Then you claim, again without proof or even argument (beyond your definition), that they really are the best or idealized morals, for all humans at least, and describe universal moral obligations.
You can’t just give an arbitrary definition and transform it into a moral claim without any actual argument. How is that different from me saying: I define X-Morals as “the morals achieved by all sufficiently well informed and smart humans, which require that they greet each person they meet by hugging”? If you don’t like this requirement, it’s by definition because you’re misinformed or stupid.
I also think that the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion.
The same conclusion about facts they have information about: like physical facts, or logical theorems. But nobody has “information about morals”. Morals are just a kind of preferences. You can only have information about some particular person’s morals, not morals in themselves. So perfect Bayesians will agree about what my morals are and about what your morals are, but that doesn’t mean your and my morals are the same. Your argument is circular.
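To illustrate the distinction, here is a minimal sketch in Python, with a made-up coin model and made-up numbers: two agents who share a prior and see the same evidence necessarily end up with the same posterior about the factual question, while their preferences remain whatever they were to begin with.

```python
# Illustrative sketch only: identical priors plus identical evidence force
# identical posteriors about a *fact*; they say nothing about preferences.
# The coin model and all numbers here are invented for the illustration.

def posterior_biased(prior_biased: float, flips: list[bool]) -> float:
    """Posterior probability that the coin is the 0.8-heads coin rather
    than a fair coin, after observing `flips` (True = heads)."""
    p_biased = prior_biased
    p_fair = 1.0 - prior_biased
    for heads in flips:
        p_biased *= 0.8 if heads else 0.2
        p_fair *= 0.5
    return p_biased / (p_biased + p_fair)

shared_evidence = [True, True, False, True, True]

# Two ideal reasoners with the same prior and the same information
# must agree about the factual question:
alice_belief = posterior_biased(0.5, shared_evidence)
bob_belief = posterior_biased(0.5, shared_evidence)
assert alice_belief == bob_belief

# ...but that agreement does not touch what each of them *wants*:
alice_preference = "bet on heads"
bob_preference = "never gamble"
print(alice_belief, bob_belief, alice_preference, bob_preference)
```

Agreement about the coin is forced; nothing analogous forces the two preference strings to match.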
This is where I bring in the difference between morality and preference. I basically define morality as being about what’s best for everyone in general, as opposed to preference, which is what’s best for yourself.
Well, first of all, that’s not how everyone else uses the word “morals”. Normally we would say that your morals are to do what’s best for everyone, while my morals are something else. Calling your personal morals simply “morals” is equivalent to saying that my (different) morals shouldn’t be called “morals”, or even “Daniel’s morals”, which is simply wrong.
As for your definition of (your) morals: you describe, roughly, utilitarianism. But people argue forever over brands of utilitarianism: average utilitarianism vs. total utilitarianism, different handling of utility monsters, different handling of “zero utility”, different (necessarily arbitrary) weightings of whose preferences are considered (do we satisfy paperclippers?), and so on. Experimentally, people are uncomfortable with any single concrete version (they have “repugnant conclusions”). And even if you have a version that you personally are satisfied with, that is not yet an argument for others to accept it in place of other versions (and of non-utilitarian approaches).
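A toy illustration of just one of those divergences, with numbers I have made up purely for the example:

```latex
% Made-up numbers, only to show that the two brands can disagree.
\begin{align*}
  \text{World } A &:\; 10 \text{ people at utility } 10
     \;\Rightarrow\; \text{total} = 100,\ \text{average} = 10 \\
  \text{World } B &:\; 1000 \text{ people at utility } 1
     \;\Rightarrow\; \text{total} = 1000,\ \text{average} = 1
\end{align*}
```

Total utilitarianism prefers $B$ and average utilitarianism prefers $A$, which is roughly the shape of the “repugnant conclusion” disputes; your definition doesn’t say which verdict the sufficiently well informed, smart human is supposed to reach.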
We obviously have a different view on the subjectivity of morals, no doubt an argument that’s been had many times before. The sequences claim to have resolved it or something, but in such a way that we both still seem to see our views as consistent with them.
To me, subjective morals like you talk about clearly exist, but I don’t see them as interesting in their own right. They’re just preferences people have about other people’s business. Interesting for the reasons any preference is interesting but no different.
The fundamental requirement for objective morals is simply that one (potential future) state of the world can be objectively better or worse than another. What constitutes “better” and “worse” is, of course, an important and difficult question, but still an objective one. I would call the negation, the idea that every possible state of the world is just as good as any other, moral nihilism.
I accept that it’s used for the subjective type as well, but personally I save the use of the word “moral” for the objective type: the actual pursuit of a better state of the world irrespective of our own personal preferences. I see objectivity as what separates morals from preferences in the first place—the core of taking a moral action is that your purpose is the good of others, or more generally the world around you, rather than yourself. I don’t agree that people having moral debates are simply comparing their subjective views (which sounds to me like “Gosh, you like fish? I like fish too!”); they’re arguing because they think there is actually an objective answer to which of them is right and they want to find out who it is (well, actually usually they just want to show everyone that it’s them, but you know what I mean).
This whole argument is actually off topic though. I think the point where things went wrong is where I answered the wrong question (though in my defence it was the one you asked). You asked how I determine what the number N is, but I never really even claimed to be able to do that in the first place. What I think you really wanted to know is how I define it. So I’ll give you that. This isn’t quite the perfect definition but it’s a start.
Imagine you’re outside space and time, and can see two worlds: one in which J is left alone, the other in which they’re eradicated. Now, imagine you’re going to choose one of these worlds, in which you’ll then live the life of a randomly chosen person. Once you make the decision, your current preferences, personality, and so on will cease to exist and you’ll just become that new random person. So the question becomes “Which world would you choose?”. Or, more to the point, “For what value of N would you decide it’s worth the risk of being eradicated as a J for the much higher chance of being a slightly happier N?”.
The one that’s “better” is the one that you would choose. Actually, more specifically it’s the one that’s the correct choice to make. I’d argue this correctness is objective, since the consequences of your choice are completely independent of anything about you. Note that although the connection to my view on morality is probably pretty clear, this definition doesn’t use the word “moral” anywhere. The main post posits an objective question of which is better, and this is simply my attempt to give a reasonable definition of what they’re asking.
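To make the trade-off explicit, here is a minimal expected-value sketch of that choice, under assumptions I’m adding purely for illustration: there are $j$ members of J and $N$ others, a J life left alone is worth $u_J$, an eradicated J counts as $0$, and each of the $N$ others has baseline utility $u_N$ in either world plus a small gain $\delta$ if J is eradicated. As the randomly chosen person:

```latex
% Sketch only; j, N, u_J, u_N and delta are assumptions added for illustration.
\begin{align*}
  \mathbb{E}[\text{world where J is left alone}]
     &= \frac{j\,u_J + N\,u_N}{j + N} \\
  \mathbb{E}[\text{world where J is eradicated}]
     &= \frac{j \cdot 0 + N\,(u_N + \delta)}{j + N}
\end{align*}
```

On these assumptions the second world wins exactly when $N\,\delta > j\,u_J$, i.e. when $N > j\,u_J/\delta$, which is one way of reading “for what value of N”.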