To imagine being someone else is something people (including you, I assume) do all the time. Maybe it’s difficult to do, and certainly your imagination will never be perfectly accurate to their experience, but it is not incoherent.
But the scenario isn’t about imagining anything, it’s…
I do see your point: you can’t become someone else because you would then no longer be you.
… exactly.
What you are arguing is that, given your view of personal identity (which, based on your comments, I presume is in line with Closed Individualism), the Veil of Ignorance is not an accurate description of reality. Sure, I don’t disagree there.
I think it would be helpful to first specify your preferred theory of identity, rather than dismissing the VOI as nonsense altogether. That way, if you think Closed Individualism is obviously true, you could have a productive conversation with someone who disagrees with you on that.
I’ve avoided engaging with the whole “views of personal identity” stuff, because “empty individualism” and “open individualism” are so obviously absurd if taken anything like literally. People sometimes bring these things up in a way that seems like positing weird hypotheticals for convoluted rhetorical reasons, but if someone really believes either of them, I generally assume that person is either bizarrely credulous when presented with superficially coherent-seeming arguments, or has done too many psychedelic drugs.
In any case it seems a moot point. OP is very clear that his argument is perfectly meaningful, and works, under “closed individualism”:
If you use the lens of Closed Individualism, before you are born, you could be born as any possible sentient being. In other words, the set of all conscious beings born in a given moment is in a sense all the beings exiting the veil of ignorance.
Now, as I’ve said, this is complete nonsense. But it’s definitely a claim that was made in the OP’s post, not some additional assumption that I am making. So as far as I can tell, there is no need, for the purposes of this discussion that we’ve been having, to talk about which “view of personal identity” is correct.
No, the goal of the thought experiment is to argue that you should want to do this. If you start out already wanting to do this, then the thought experiment is redundant and unmotivated.
If you accept sentientism, I think it is still very useful to consider what changes to our current world would bring us closer to an ideal world under that framework. This is how I personally answer “what is the right thing for me to do?”
Whether to accept “sentientism” is the thing that you (or the OP, at least) are supposedly arguing for in the first place… if you already accept the thing being argued for, great, but then any reasoning that takes that as a baseline is obviously irrelevant to criticisms of the argument for accepting that thing!
So, yes, as a point of metaethics, we recognize that aliens won’t share our morality, etc. But this has zero effect on our ethics. It’s simply irrelevant to ethical questions—a non sequitur.
I don’t see why a meta-ethical theory cannot inform your ethics.
I didn’t make any such claim. I was talking about this specific metaethical claim, and its effects (or lack thereof) on our ethical views.
Believing that my specific moral preferences do not extend to everyone else has certainly helped me answer “what is the right thing for me to do?”
Now, I do want to be sure that we’re clear here: if by “my specific moral preferences do not extend to everyone else” you are referring to other humans, then Eliezer’s metaethical view definitely does not agree with you on this!
I believe that humans (and superintelligences) should treat all sentient beings with compassion. If the Veil of Ignorance encourages people to consider the perspective of other beings and reflect on the specific circumstances that have cultivated their personal moral beliefs, I consider it useful and think that endorsing it is the right thing for me to do.
Once again I would like to point out that this cannot possibly be relevant to any argument about whether Rawlsian reasoning is correct (or coherent) or not, since what you say here is based on already having accepted the conclusion of said reasoning.
In essence you are saying: “I believe X. If argument Y convinces people to believe X, then it’s good to convince people that argument Y is valid.”
This is a total non sequitur in response to the question “is argument Y valid?”.
In other words, your bottom line is already written, and you are treating the “veil of ignorance” argument as a tool for inducing agreement with your views. Now, even setting aside the fact that this is inherently manipulative behavior, the fact is that none of this can possibly help us figure out if the “veil of ignorance” argument is actually valid or not.
What I meant was: caring about another sentient being experiencing pain/pleasure, primarily because you can imagine what they are experiencing and how un/desirable it is, indicates that you care about experienced positive and subjective states, and so this care applies to all beings capable of experiencing such states.
This would seem to be circular reasoning. What you’re saying here boils down to “if you care about any experienced pain/pleasure, then you care about any experienced pain/pleasure”. But your claims were:
“… when you develop a moral/ethical framework that includes any minds outside of your own, it should logically extend to all minds that exist/will ever exist.” In other words, caring about anyone else means you should probably care about all sentient beings.
Now, this might be true if the reason why your moral/ethical framework includes any minds outside of your own—the reason why you care about anyone else—is in fact that you “care about experienced positive and subjective [did you mean ‘negative’ here, by the way?] states”, in a generic and abstract way. But that need not be the case (and indeed is untrue for most people), and thus it need not be the reason why you care about other people.
I care about my mother, for example. The reason why I care about my mother is not that I started with an abstract principle of caring about “experienced pain/pleasure” with no further specificity, and then reasoned that my mother experiences pain and pleasure, therefore my mother is a member of a class of entities that I care about, therefore I care about my mother. Of course not! Approximately 0.00% of all humans think like this. And yet many more than 0.00% of all humans care about other people—because most of them care about other people for reasons that look very different from this sort of “start with the most general possible ideas about valence of experience and then reason from that down to ‘don’t kick puppies’” stuff.