Insofar as my own actions are atypical, I intend for that to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim that elite common sense should also apply as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I’m more concerned with my very immediate emotional reaction, then there’s less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I’m not so gung ho about what my extrapolated selves would think on some questions.)
As you point out, one choice point is how much idealization to introduce. At one extreme, you might introduce no idealization at all, so that whatever you presently approve of is what you’ll assume is right. At the other extreme, you might have a great deal of idealization: you may assume that a better guide is what you would approve of if you knew much more, had experienced much more, were much more intelligent, made no cognitive errors in your reasoning, and had much more time to think. I lean in favor of the latter extreme, as I believe most people who have considered this question do, though I recognize that you want to specify your procedure in a way that leaves some core part of your values unchanged. Still, I think this is a choice that turns on many tricky cognitive steps, any of which could easily be taken in the wrong direction. So I would urge that insofar as you are making a very unusual decision at this step, you should try to very carefully understand the process that others are going through.
ETA: I’d also caution against just straight-out assuming a particular meta-ethical perspective. This is not a case where you are an expert in the sense of someone who elite common sense would defer to, and I don’t think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
> I don’t think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One’s choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn’t much concern me. Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don’t think there’s something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even among moral realists, I’ve never heard someone argue that it’s a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.
> My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One’s choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn’t much concern me.
I’m a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.
Like Luke Muehlhauser, I believe that we don’t even know what we’re asking when we ask ethical questions, and I suspect we don’t really know what we’re asking when we ask meta-ethical questions either. As far as I can tell, you’ve picked one possible candidate thing we could be asking—“what do I care about right now?”—among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that’s what you’re asking.
> Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don’t think there’s something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this.
I think most people would just think you had made an error somewhere and not be able to say where it was, and add that you were talking about a completely murky issue that people aren’t good at thinking about.
I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
[Edited to reduce rhetoric.]
> In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on.
I think it’s fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.
> Like Luke Muehlhauser, I believe that we don’t even know what we’re asking when we ask ethical questions
Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: “Within 20 seconds of arguing about the definition of ‘desire’, someone will say, ‘Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.’”
Here I would say, “Screw ethics and meta-ethics. All I’m saying is I want to do what I feel like doing, even if you and other elites don’t agree with it.”
> I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I should point out that “doing what I feel like doing” doesn’t necessarily mean running roughshod over other people’s values. I think it’s generally better to seek compromise and remain friendly to those with whom you want to cooperate. It’s just that this is an instrumental concession, not because I actually agree with the values of those I’m willing to be nice to.
> Here I would say, “Screw ethics and meta-ethics. All I’m saying is I want to do what I feel like doing, even if you and other elites don’t agree with it.”
I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don’t think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say “I want to do what I want to do” I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
> > I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
> Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don’t fully understand but seems to deliver valuable results.
> When you say “I want to do what I want to do” I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I’m not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it’s not dependent on a controversial theory of meta-ethics. It’s just that I intuitively don’t like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
On some questions, my emotions are too strong to set aside, and it feels like it would be bad to budge from my current stance.
> I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don’t fully understand but seems to deliver valuable results.
Fair enough. :) I’ll buy that way of putting it.
Anyway, if I were really as unreasonable as it sounds, I wouldn’t be talking here and putting at risk the preservation of my current goals.
> I’m not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it’s not dependent on a controversial theory of meta-ethics. It’s just that I intuitively don’t like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik’s present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I’m coming from in your language. This is hard to write and think about.
> an unusual approach to dealing with moral questions
Why do you think it’s unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are “intuitive”—they go by gut feeling, basically. I think that’s the normal mode in which most of humanity operates most of the time.
I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian’s perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.
> I think other people are significantly more responsive to values disagreements
That’s a pretty meaningless statement without specifying which values. How responsive do you think “other people” would be to value disagreements about child pornography, for example?
I suspect Nick would say that if there were respected elites who favored increasing the amount of child pornography, he would give some weight to the possibility that such a position was in fact something he would come to agree with upon further reflection.
> Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours.
Or, most likely of all, it’s because I don’t care to justify it. If you want to call “not wanting to justify a stance” a bias or blind spot, I’m ok with that.
> Hope that helps articulate where I’m coming from in your language. This is hard to write and think about.
:)