You are imagining that you can become any of the sentient minds entering existence at any given moment, and that you will inherit their specific circumstances and subjective preferences.
But of course I can’t do that either. Nor can you, nor can anyone else. And this is just as nonsensical, in multiple ways, as the “disembodied spirits” scenario; for example, if “you” “become” a different sentient mind that is entering existence at some moment, then in what sense are “you” still the “you” that did the “becoming”? This scenario is just as incoherent as the other one!
The goal of the thought experiment, then, is to construct a desirable world that encompasses the specific desires and preferences of all sentient beings.[1]
No, the goal of the thought experiment is to argue that you should want to do this. If you start out already wanting to do this, then the thought experiment is redundant and unmotivated.
The thought experiment is supposed to be somewhat practical, so I don’t think you need to consider aliens, or the entire set of all possible minds that can ever exist.
It is a strange “somewhat practical” thought experiment that posits a scenario so thoroughly impossible and incoherent!
With no constraints, your answer might look like “give every sentient being a personal utopia.” Our individual ability to change the real world is heavily constrained, though, so some realistic takeaways can be …
These suggestions, of course, assume that the thought experiment has already persuaded you… which, again, would make it redundant.
I (taking your question seriously) answer that, obviously, the master should command the slave, and the slave should obey the master...
This would not be a faithful execution of the thought experiment, as you would obviously be ignoring the possibility of existing as a slave (who, presumably, does not prefer to be enslaved).
Uh… I think you have not understood my point in offering that hypothetical. I will ask that you please reread what I wrote. (If you still don’t see what I was saying after rereading, I will attempt to re-explain.)
In a sense, the veil of ignorance is an exercise in rationality – you are recognizing that your mind (and every other mind) is shaped by its specific circumstances, so your personal conception of what’s moral and good doesn’t necessarily extend to all other sentient beings. If I’m not mistaken, this seems to be in line with Eliezer’s position on meta-ethics.
But the Rawlsian argument is not a metaethical argument. It is an object-level ethical argument.
You’re quite right: Eliezer’s position on metaethics (with which I wholly agree) is that my personal conception (and, broadly construed, the human conception) of what’s moral and good not only does not necessarily extend, but definitely does not extend, to all other sapient[1] beings. Another way of putting it (and this is closer to Eliezer’s own formulation) is that “morality” is synonymous with “human morality”. There isn’t any such thing as “alien morality”; what the aliens have is something else, not morality.
But, again, that is Eliezer’s position on metaethics—not on ethics. His position on ethics is just various object-level claims like “more life is better than less life” and “you shouldn’t lie to people to coerce them into doing what you want” and “stealing candy from babies is wrong” and so on.
So, yes, as a point of metaethics, we recognize that aliens won’t share our morality, etc. But this has zero effect on our ethics. It’s simply irrelevant to ethical questions—a non sequitur.
None of this has any resemblance to Rawlsian reasoning, which, again, is an ethical argument, not a metaethical one.
I prefer to describe it as “your mind emerged from physical matter, and developed all of its properties based on that matter, so when you develop a moral/ethical framework that includes any minds outside of your own, it should logically extend to all minds that exist/will ever exist.”
“Logically”? Do please elaborate! What exactly is the logical chain of reasoning here? As far as I can tell, this is a logical leap, not a valid inference.
In other words, caring about anyone else means you should probably care about all sentient beings.
But… why?
Is there something incoherent about caring about only some people/things/entities and not others? Surely there isn’t. I seem to be able to care about only a subset of existing entities. Nothing appears to prevent me from doing so. What exactly does it mean that I “should” care about things that I do not, in fact, care about?
To say this of merely sentient beings—in the sense of “beings who can experience sensations”—is also true, but trivial; out of all sentient beings known to us, humans are the only ones that are sapient (a.k.a. self-aware); cows, chickens, chimpanzees, etc., are not capable of having any “conception of what’s moral and good”, so the fact that they don’t have our conception of what’s moral and good, specifically, is wholly uninteresting.
To refine this discussion, I’ll only be responding to what I think are major points of disagreement.
This scenario is just as incoherent as the other one!
No, it is not.
Imagining yourself as a “disembodied spirit,” or a mind with no properties, is completely incoherent.
To imagine being someone else is something people (including you, I assume) do all the time. Maybe it’s difficult to do, and certainly your imagination will never be perfectly accurate to their experience, but it is not incoherent.
I do see your point: you can’t become someone else because you would then no longer be you. What you are arguing is that, under your view of personal identity (which, based on your comments, I presume is in line with Closed Individualism), the Veil of Ignorance is not an accurate description of reality. Sure, I don’t disagree there.
I think it would be helpful to first specify your preferred theory of identity, rather than dismissing the VOI as nonsense altogether. That way, if you think Closed Individualism is obviously true, you could have a productive conversation with someone who disagrees with you on that.
No, the goal of the thought experiment is to argue that you should want to do this. If you start out already wanting to do this, then the thought experiment is redundant and unmotivated.
If you accept sentientism, I think it is still very useful to consider what changes to our current world would bring us closer to an ideal world under that framework. This is how I personally answer “what is the right thing for me to do?”
So, yes, as a point of metaethics, we recognize that aliens won’t share our morality, etc. But this has zero effect on our ethics. It’s simply irrelevant to ethical questions—a non sequitur.
I don’t see why a meta-ethical theory cannot inform your ethics. Believing that my specific moral preferences do not extend to everyone else has certainly helped me answer “what is the right thing for me to do?”
I believe that humans (and superintelligences) should treat all sentient beings with compassion. If the Veil of Ignorance encourages people to consider the perspective of other beings and reflect on the specific circumstances that have cultivated their personal moral beliefs, I consider it useful and think that endorsing it is the right thing for me to do.
Of course, this should have no direct bearing on your beliefs, and you are perfectly free to do whatever maximizes your internal moral reward function.
Is there something incoherent about caring about only some people/things/entities and not others? Surely there isn’t.
I think we agree here.
What I meant was: caring about another sentient being experiencing pain/pleasure, primarily because you can imagine what they are experiencing and how un/desirable it is, indicates that you care about experienced positive and subjective states, and so this care applies to all beings capable of experiencing such states.
I generally accept Eliezer’s meta-ethical theory and so I don’t assume this necessarily applies to anybody else.
To imagine being someone else is something people (including you, I assume) do all the time. Maybe it’s difficult to do, and certainly your imagination will never be perfectly accurate to their experience, but it is not incoherent.
But the scenario isn’t about imagining anything, it’s…
I do see your point: you can’t become someone else because you would then no longer be you.
… exactly.
What you are arguing is that through your view of personal identity (which, based on your comments, I presume is in line with Closed Individualism), the Veil of Ignorance is not an accurate description of reality. Sure, I don’t disagree there.
I think it would be helpful to first specify your preferred theory of identity, rather than dismissing the VOI as nonsense altogether. That way, if you think Closed Individualism is obviously true, you could have a productive conversation with someone who disagrees with you on that.
I’ve avoided engaging with the whole “views of personal identity” stuff, because “empty individualism” and “open individualism” are so obviously absurd if taken anything like literally. People sometimes bring up these things in a way that seems like positing weird hypotheticals for convoluted rhetorical reasons, but if someone really believes either of these things, I generally assume that person either is bizarrely credulous when presented with superficially coherent-seeming arguments, or has done too many psychedelic drugs.
In any case it seems a moot point. OP is very clear that his argument is perfectly meaningful, and works, under “closed individualism”:
If you use the lens of Closed Individualism, before you are born, you could be born as any possible sentient being. In other words, the set of all conscious beings born in a given moment is in a sense all the beings exiting the veil of ignorance.
Now, as I’ve said, this is complete nonsense. But it’s definitely a claim that was made in the OP’s post, not some additional assumption that I am making. So as far as I can tell, there is no need, for the purposes of this discussion that we’ve been having, to talk about which “view of personal identity” is correct.
No, the goal of the thought experiment is to argue that you should want to do this. If you start out already wanting to do this, then the thought experiment is redundant and unmotivated.
If you accept sentientism, I think it is still very useful to consider what changes to our current world would bring us closer to an ideal world under that framework. This is how I personally answer “what is the right thing for me to do?”
Whether to accept “sentientism” is the thing that you (or the OP, at least) are supposedly arguing for in the first place… if you already accept the thing being argued for, great, but then any reasoning that takes that as a baseline is obviously irrelevant to criticisms of the argument for accepting that thing!
So, yes, as a point of metaethics, we recognize that aliens won’t share our morality, etc. But this has zero effect on our ethics. It’s simply irrelevant to ethical questions—a non sequitur.
I don’t see why a meta-ethical theory cannot inform your ethics.
I didn’t make any such claim. I was talking about this specific metaethical claim, and its effects (or lack thereof) on our ethical views.
Believing that my specific moral preferences do not extend to everyone else has certainly helped me answer “what is the right thing for me to do?”
Now, I do want to be sure that we’re clear here: if by “my specific moral preferences do not extend to everyone else” you are referring to other humans, then Eliezer’s metaethical view definitely does not agree with you on this!
I believe that humans (and superintelligences) should treat all sentient beings with compassion. If the Veil of Ignorance encourages people to consider the perspective of other beings and reflect on the specific circumstances that have cultivated their personal moral beliefs, I consider it useful and think that endorsing it is the right thing for me to do.
Once again I would like to point out that this cannot possibly be relevant to any argument about whether Rawlsian reasoning is correct (or coherent) or not, since what you say here is based on already having accepted the conclusion of said reasoning.
In essence you are saying: “I believe X. If argument Y convinces people to believe X, then it’s good to convince people that argument Y is valid.”
This is a total non sequitur in response to the question “is argument Y valid?”.
In other words, your bottom line is already written, and you are treating the “veil of ignorance” argument as a tool for inducing agreement with your views. Now, even setting aside the fact that this is inherently manipulative behavior, the fact is that none of this can possibly help us figure out if the “veil of ignorance” argument is actually valid or not.
What I meant was: caring about another sentient being experiencing pain/pleasure, primarily because you can imagine what they are experiencing and how un/desirable it is, indicates that you care about experienced positive and subjective states, and so this care applies to all beings capable of experiencing such states.
This would seem to be circular reasoning. What you’re saying here boils down to “if you care about any experienced pain/pleasure, then you care about any experienced pain/pleasure”. But your claims were:
“… when you develop a moral/ethical framework that includes any minds outside of your own, it should logically extend to all minds that exist/will ever exist.” In other words, caring about anyone else means you should probably care about all sentient beings.
Now, this might be true if the reason why your moral/ethical framework includes any minds outside of your own—the reason why you care about anyone else—is in fact that you “care about experienced positive and subjective [did you mean ‘negative’ here, by the way?] states”, in a generic and abstract way. But that need not be the case (and indeed is untrue for most people), and thus it need not be the reason why you care about other people.
I care about my mother, for example. The reason why I care about my mother is not that I started with an abstract principle of caring about “experienced pain/pleasure” with no further specificity, and then reasoned that my mother experiences pain and pleasure, therefore my mother is a member of a class of entities that I care about, therefore I care about my mother. Of course not! Approximately 0.00% of all humans think like this. And yet many more than 0.00% of all humans care about other people—because most of them care about other people for reasons that look very different from this sort of “start with the most general possible ideas about valence of experience and then reason from that down to ‘don’t kick puppies’” stuff.