Thanks for such an in-depth response! I’ll just jump right in. I haven’t deeply proofread this, so please take it with a grain of salt.
The point of a moral/ethical framework of any sort—the point of ethics, generally—is to provide you with an answer to the question “what is the right thing for me to do”.
I’m not trying to frame the veil of ignorance (VOI) as a moral or ethical framework that answers that question. I’m arguing for the VOI as a meta-meta-ethical framework, which grounds the meta-ethical framework of sentientism, which can ground many different object-level frameworks that answer “what is the right thing for me to do”, as long as those object-level frameworks consider all sentient beings as morally relevant.
We exist as physical beings in a specific place, time, social and historical context, material conditions, etc. (And how could it be otherwise?) Our thoughts (beliefs, desires, preferences, personality, etc.) are the products of physical processes. _We_ are the products of physical processes. _That includes our preferences, and our moral intuitions, and our beliefs about morality._ These things don’t come from nowhere! They are the products of our specific neural makeup, which itself is the product of specific evolutionary circumstances, specific cultural circumstances, etc.
100% agree with you here.
“Imagine that you are a disembodied, impersonal spirit, existing in a void, having no identity, no desires, no interests, no personality, and no history (but you can think somehow)” is basically gibberish. It’s a well-formed sentence and it seems to be saying something, but if you actually try to imagine this scenario, and follow the implications of what you’ve just been told, you run directly into several brick walls simultaneously. The whole thought experiment, and the argument that follows from it, is just the direst nonsense.
I agree that nobody can do literally that. I do think that doing your best at that will allow you to be a lot more impartial. Minor nitpick: the imagined disembodied spirit should have desires and interests in the thought experiment, at the very least the desire not to experience suffering when they’re born.
So ethics can’t have anything to say about what you should do if you find yourself in this hypothetical situation of being behind the veil of ignorance
I agree; in the post I even point out that from behind the veil you could endorse other positions for outside the veil, such as being personally selfish even at others’ expense. The point of the thought experiment is that thinking about it can help you refine your views on how you think you should act. The point is not to tell you what to do if you find yourself behind the veil of ignorance, which, as you say, is incoherent.
There isn’t any pre-existing thinking entity which gets embodied
I’m not following how this rules it out from being an analogy. My understanding of analogies is that they don’t need to be exactly the same for the relevant similarity to help transfer the understanding.
Another way to put it is that you are asking us (by extending what Rawls is asking us) to perform a mental operation that is something like “imagine that you could have been a chicken instead of a human”.
Well, yeah, that is almost exactly what I’m doing! Except generalized to all sentient beings :) I don’t see why you would take so much issue with a question like that. There are many things we don’t (and likely can’t) know about chickens’ internal experiences, but there’s a lot of very important and useful ground that can be covered by asking that question, because there is a lot we can know to a high degree of confidence. If I were asked that, I would look at our understanding of chicken neurology, at how chickens respond to different drugs (painkillers, pleasure-inducing ones), and at our understanding of evolutionary psychology and what kinds of mental patterns would lead to chickens behaving in the ways that they do. I could not give an exact answer, but if I were a chicken I’m almost certain I’d experience positive valence eating corn and fruit and bugs, and negative valence if I got hit or broke a bone, and that’s just what I’m highly confident about. With enough time and thought I’m sure I could discuss a wide range of experiences, with varying degrees of confidence about how I’d experience them as a chicken. Even though it would be impossible for me, writing this, to ever actually experience those things, it’s still easy to take my understanding of the world and apply it in a thought experiment.
Now, the obvious question to ask is whether there’s anything you can do to convince me that the reasoning from behind the veil of ignorance should proceed as you say, and not as I (in this hypothetical scenario) say; or, is there nothing more you can say to me? And if the latter—what, if anything, does this prove about the original position argument? (At the very least, it would seem to highlight the fact that Rawls’s reasoning is shaky even granting the coherence of his hypothetical!)
This I resonate much more with. If someone would genuinely be happy with a coin flip deciding whether they’re master or slave, I don’t think there’s anything I could say to convince them against slavery.
It sure seems awfully convenient that when you posit these totally impersonal disembodied spirits, they turn out to have the moral beliefs of modern Westerners. Why should that be the case? Our own moral intuitions, again, don’t come from nowhere. What if we find ourselves behind the veil of ignorance, and all the disembodied spirits are like, “yeah, the slave should obey the master, etc.”?
I don’t think they ought to have the moral beliefs of modern Westerners. I think I’m probably wrong or confused or misguided about a lot of moral questions; I think probably everyone is, modern Westerner or not. The slave question is sillier on the assumption that they don’t want to be slaves; if they’re equally fine with being a slave or a master, it wouldn’t be very silly of them.
As Eliezer comments in [his writings about metaethics](https://www.readthesequences.com/Book-V-Mere-Goodness), human morality is just that—human. It’s the product of our evolutionary history, and it’s instantiated in our neural makeup. It doesn’t exist outside of human brains.
Once again, absolutely agree.
On the pebblesorters question, my interpretation of that story was that we humans do a mind-numbing number of things just as silly as the pebblesorters. To take just one example, music is just arranging patterns of sound waves in the air in the “right” way, which is no more or less silly than the “right” pebble stacks. Behind the human/chicken/pebblesorter/etc. veil, I would argue that all of us look extremely silly! From behind the veil, I likely wouldn’t care all that much about fairness/justice, beyond how it might impact valence.
The bottom line is that Rawls’s argument is an [intuition pump](https://en.wikipedia.org/wiki/Intuition_pump), in Dennett’s derogatory sense of the term. It is designed to lead you to a conclusion, while obscuring the implications of the argument, and discouraging you from thinking about the details. Once looked at, those implications and details show clearly: the argument simply does not work.
I do in fact want to lead towards sentientism. Is it fair to use the term in its derogatory sense if I’m quite clear and explicit about that? I already described this whole post as an intuition pump before the introduction; I just think that transparent intuition pumps are not just fine but can be quite useful and good.
I’m not trying to frame the veil of ignorance (VOI) as a moral or ethical framework that answers that question. I’m arguing for the VOI as a meta-meta-ethical framework, which grounds the meta-ethical framework of sentientism, which can ground many different object-level frameworks that answer “what is the right thing for me to do”, as long as those object-level frameworks consider all sentient beings as morally relevant.
I think you’ve got your meta-levels mixed up. For one thing, there isn’t any such thing as “meta-meta-ethics”; there’s just metaethics, and anything “more meta” than that is still metaethics. For another thing, “sentientism” is definitely object-level ethics; metaethics is about how to reason about ethics—definitionally it cannot include any ethical principles, which “sentientism” clearly is. This really seems like an attempt to sneak in object-level claims by labeling them as “meta-level” considerations.
I agree that nobody can do literally that [“Imagine that you are a disembodied, impersonal spirit” etc.]. I do think that doing your best at that will allow you to be a lot more impartial.
There is no such thing as “doing your best at” imagining an incoherent scenario. Or, to put it another way, to do your best at doing this is to not do it. That is the best. Any attempt to imagine a scenario which we have already established is incoherent is less than the best. To attempt this is nothing more than to confuse yourself.
This really is a very common sort of mistake, I find.[1] “Doing X is impossible, because X is not coherently defined in the first place.” “But what if we do our best to do X as well as we can?” If you say this, then you have not understood the point: we cannot “do our best at” something which cannot be done at all, because it isn’t “a thing” in the first place. You have words, but those words do not refer to anything.
Minor nitpick: the imagined disembodied spirit should have desires and interests in the thought experiment, at the very least the desire not to experience suffering when they’re born.
But this, too, is nonsense. “They” (the disembodied spirits) will not experience anything at all when “they” are born—because “they” will cease to exist as soon as they’re embodied. There is no posited continuity of consciousness, experience, memory, or anything between the disembodied spirit and the incarnated being. (The whole thought experiment really is fractally nonsensical.)
The point of the thought experiment is that thinking about it can help you refine your views on how you think you should act.
Yes, that claim is what makes it an intuition pump—but as I said, it doesn’t work, because the hypothetical scenario in the thought experiment has no bearing on any situation we could ever encounter in the real world, has no resemblance to any situation we could ever encounter in the real world, etc.
I’m not following how this rules it out from being an analogy. My understanding of analogies is that they don’t need to be exactly the same for the relevant similarity to help transfer the understanding.
But this isn’t just a case of “not exactly the same”. Nothing approximately like, or even remotely resembling, the hypothetical scenario actually takes place.
I don’t see why you would take so much issue with a question like that. There are many things we don’t (and likely can’t) know about chickens’ internal experiences, but …
Uh… I’m afraid that nothing in this paragraph is even slightly responsive to the part of my comment that you’re responding to. I’m honestly not sure how it’s even supposed to be, or how it could be. Basically all of those things seem like non sequiturs.
With enough time and thought I’m sure I could discuss a wide range of experiences, with varying degrees of confidence about how I’d experience them as a chicken. Even though it would be impossible for me, writing this, to ever actually experience those things, it’s still easy to take my understanding of the world and apply it in a thought experiment.
Like the rest of this paragraph, this is non-responsive to my comment, but I am curious: do you have a principled disagreement with all of the arguments for why nothing remotely like this is possible even in principle, or… are you not familiar with them? (Thomas Nagel’s being the most famous one, of course.)
I don’t think they ought to have the moral beliefs of modern Westerners. I think I’m probably wrong or confused or misguided about a lot of moral questions; I think probably everyone is, modern Westerner or not. The slave question is sillier on the assumption that they don’t want to be slaves; if they’re equally fine with being a slave or a master, it wouldn’t be very silly of them.
This seems hard to square with the positions you take on all the stuff in your post…
On the pebblesorters question, my interpretation of that story was that we humans do a mind-numbing number of things just as silly as the pebblesorters. To take just one example, music is just arranging patterns of sound waves in the air in the “right” way, which is no more or less silly than the “right” pebble stacks. Behind the human/chicken/pebblesorter/etc. veil, I would argue that all of us look extremely silly! From behind the veil, I likely wouldn’t care all that much about fairness/justice, beyond how it might impact valence.
I think there’s some very deep confusion here… are you familiar with Eliezer’s writing on metaethics? (I don’t know whether that would necessarily resolve any of the relevant confusions or disagreements here, but it’s the first thing that comes to mind as a jumping-off point for untangling this.)
[1] One sees it in discussions of utilitarianism, for instance. “Interpersonal comparisons of decision-theoretic utility are incoherent as a concept.” “Can’t we sort of do our best to approximately compare utilities across agents, though?” No, we can’t, because there isn’t anything to approximate. There is no ground truth that we can approach with an estimate. The comparison does not mean anything.
On Yudkowsky, keep in mind he’s written at least two (that I know of) detailed fictions imagining and exploring impossible/incoherent scenarios—HPMOR and planecrash. I’ve read the former and am partway through reading the latter. If someone says “imagining yourself in a world with magic can help tune your rationality skills,” you certainly could dismiss that by saying it’s an impossible situation, so the best you can do is not imagine it, and maybe your rationality skills are already at a level where the exercise would not provide any value. But at least for me, prompts like that and the veil of ignorance are useful for sharpening my thinking on rationality and ethics, respectively.
On Yudkowsky, keep in mind he’s written at least two (that I know of) detailed fictions imagining and exploring impossible/incoherent scenarios—HPMOR and planecrash.
Er… what?
HPMOR is impossible, of course; it’s got outright magic, etc. No argument there. But “incoherent”? How so…?
(As for Planecrash, I do think it’s kind of incoherent in places, but only in a boring literary sense, not in the sense we’re discussing here. But I didn’t read it to the end, so mostly my response to that one is “no comment”.)
If someone says “imagining yourself in a world with magic can help tune your rationality skills,” you certainly could dismiss that by saying it’s an impossible situation, so the best you can do is not imagine it, and maybe your rationality skills are already at a level where the exercise would not provide any value. But at least for me, prompts like that and the veil of ignorance are useful for sharpening my thinking on rationality and ethics, respectively.
If someone says “conclusions about morality reached from considering scenarios in a world with magic hold in actual real-world morality, even if you don’t validate them with reasoning about non-impossible situations”, then I will definitely dismiss that and I will be right to do so. Again: ethics is our attempt to answer the question “what is the right thing for me to do”. Reasoning about situations which are fundamentally impossible (for very strong reasons of outright incoherence, not mere violations of physical laws) cannot constitute a part of that answer.
(Also, yes, I am highly skeptical of “imagining yourself in a world with magic can help tune your rationality skills”. Rationality skills are mostly tuned by doing things. Thinking about things that you’ve done, or things that you have concrete plans to do, or things that other people you know have done, etc., is also useful. Thinking about things that other people you don’t know have done is less useful but might still be useful. Thinking about things that nobody has done or will ever do is very low on the totem pole of “activities that can assist you in honing your rationality skills”.)
I think you’ve got your meta-levels mixed up. For one thing, there isn’t any such thing as “meta-meta-ethics”; there’s just metaethics, and anything “more meta” than that is still metaethics. For another thing, “sentientism” is definitely object-level ethics; metaethics is about how to reason about ethics—definitionally it cannot include any ethical principles, which “sentientism” clearly is. This really seems like an attempt to sneak in object-level claims by labeling them as “meta-level” considerations.
Ah I see, yes I did have them mixed up. Thanks for the correction.
Yes, that claim is what makes it an intuition pump—but as I said, it doesn’t work, because the hypothetical scenario in the thought experiment has no bearing on any situation we could ever encounter in the real world, has no resemblance to any situation we could ever encounter in the real world, etc.
On the incoherence of the thought experiment, @neo’s comment explains it pretty well, I thought. I will say that I think the thought experiment still works with imaginary minds, like the pebblesorters. If the pebblesorters actually exist and are sentient, then they are morally relevant.
But this isn’t just a case of “not exactly the same”. Nothing approximately like, or even remotely resembling, the hypothetical scenario actually takes place.
What? In the thought experiment and the real world, a great many beings are born into a world that gives rise to a variety of valenced experiences. In the thought experiment, you are tasked with determining whether you would be ok with being the one who finds themselves in any given one of those lives/experiences.
Like the rest of this paragraph, this is non-responsive to my comment, but I am curious: do you have a principled disagreement with all of the arguments for why nothing remotely like this is possible even in principle, or… are you not familiar with them? (Thomas Nagel’s being the most famous one, of course.)
You said that it is impossible for you to have turned out to be a chicken, and so I can’t be talking to you if I say “imagine that you could have been a chicken instead of a human”. I demonstrated how to imagine that very thing, implying that I could indeed be talking to you when I ask that. I agree that it is impossible for you to turn into a chicken, or for you to have been born a chicken instead of you. I disagree that it is impossible to imagine and make educated guesses about the internal mental states of a chicken.
This seems hard to square with the positions you take on all the stuff in your post…
I’m not following, sorry. Can you give an example of a position I take in the post that’s inconsistent with what I said there?
I think there’s some very deep confusion here… are you familiar with Eliezer’s writing on metaethics? (I don’t know whether that would necessarily resolve any of the relevant confusions or disagreements here, but it’s the first thing that comes to mind as a jumping-off point for untangling this.)
Maybe? I’ve read the sequences twice, one of those times poring over ~5 posts at a time as part of a book club, but maybe his writing on metaethics isn’t in there. I think we are likely talking past each other, but I’m not sure exactly where the crux is. @neo described what I’m trying to get at pretty well, and I don’t know how to do better, so maybe that can highlight a new avenue of discussion? I do appreciate you taking the time to go into this with me though!
On the incoherence of the thought experiment, @neo’s comment explains it pretty well, I thought.
See my response to that comment.
I will say that I think the thought experiment still works with imaginary minds, like the pebblesorters. If the pebblesorters actually exist and are sentient, then they are morally relevant.
“Works” how, exactly? For example, what are your actual answers to the specific questions I asked about that variant of the scenario?
What? In the thought experiment and the real world, a great many beings are born into a world that gives rise to a variety of valenced experiences. In the thought experiment, you are tasked with determining whether you would be ok with being the one who finds themselves in any given one of those lives/experiences.
In the real world, you only ever find yourself being the person who you turned out to be. There is never, ever, under any circumstances whatsoever, any choice you can make in this matter. You come into existence already being a specific person. There is nothing in reality which is even slightly analogous to there being any kind of reasoning entities that exist behind some sort of “veil of ignorance” prior to somehow becoming real.
I disagree that it is impossible to imagine and make educated guesses about the internal mental states of a chicken.
Ok… it seems that you totally ignored the question that I asked, in favor of restating a summary of your argument. I guess I appreciate the summary, but it wasn’t actually necessary. The question was not rhetorical; I would like to see your answer to it.
This seems hard to square with the positions you take on all the stuff in your post…
I’m not following, sorry. Can you give an example of a position I take in the post that’s inconsistent with what I said there?
I can, but this really seems like a tangent, since it concerns questions like “what beliefs would the disembodied spirits have and why”, which really seems like “how many angels can dance on the head of a pin”, given that the whole “disembodied spirits” concept is so thoroughly nonsensical in the first place. That part of your argument (and of Rawlsian reasoning generally) is an amusing incongruity, but on the whole it’s more of a distraction from the key points than anything.
I’ve read the sequences twice, one of those times poring over ~5 posts at a time as part of a book club, but maybe his writing on metaethics isn’t in there.
The Metaethics Sequence (which contains a few posts that didn’t make it into R:AZ, a.k.a. “The Sequences” as the term is usually meant today) is what you’ll want to check out.
“Works” how, exactly? For example, what are your actual answers to the specific questions I asked about that variant of the scenario?
The thought experiment as I execute it requires me to construct a model of other minds, human or not, that is more detailed than what I would normally think about, and to emotionally weight that model in order to get a deeper understanding of how important it is. To give an example, it’s possible for me to think about torture while being very decoupled from it, and shrug and think “that sucks for the people getting tortured”; but if I think about it more carefully, and imagine my own mental state if I were about to be tortured, then the weight of how extremely fucked up it is becomes very crisp and clear.
Perhaps it was a mistake to use Rawls’s VOI if it also implies other things that I didn’t realize I was invoking, but the way I think of it, every sentient being is actually feeling the valence of everything they’re feeling, and from an impartial perspective the true weight of that is no different from one’s own valenced experiences. And if you know that some beings experience extreme negative valence, one strategy for getting a deeper understanding of how important that is, is to think about it as if you were going to experience that level of negative valence yourself. No incoherent beings of perfect emptiness required; just the ability to model other minds based on limited evidence, to imagine how you would personally react to states across the spectrum of valence, and to scale that according to the distribution of sentient beings in the real world.
And this works on pebblesorters too, although it’s more difficult, since we can’t build a concrete model of them beyond what’s given in the story, plus maybe some assumptions if their neurobiology is at all similar to ours. If an “incorrect” pebble stack gives them negative valence at around the same level that the sound of nails on a chalkboard gives me, then that gives me a rough idea of how important it is to them (in the fictional world). If pebblesorters existed and that was the amount of negative valence caused by an “incorrect” stack, I wouldn’t mess up their stacks any more than I go around scratching chalkboards at people (while wearing earplugs so it doesn’t bother me).
To go back to the master/slave example, if the master truly thought he was about to become a slave, with everything that entails, I’m not convinced he would stick to his guns on how it’s the right order of the universe. I’m sure some people would genuinely be fine with it, but I’m guessing that if you actually had a mercenary trying to kidnap and enslave him, he’d start making excuses and trying to get out of it, in much the same way that the person claiming there’s an invisible dragon in their garage will have justifications for why you can’t actually confirm it exists.
In other words, I’m trying to describe a way of making moral views pay rent regarding the acceptable levels of negative valence in the world. Neither my views nor the thought experiment I thought I was talking about depends on disembodied spirits.
Ok… it seems that you totally ignored the question that I asked, in favor of restating a summary of your argument. I guess I appreciate the summary, but it wasn’t actually necessary. The question was not rhetorical; I would like to see your answer to it.
I only see two questions in this line of conversation?
do you have a principled disagreement with all of the arguments for why nothing remotely like this is possible even in principle, or… are you not familiar with them?
I’m not familiar with the specific arguments you’re referring to, but I don’t think it’s actually possible for disembodied minds to exist at all in the first place. So no, I don’t have principled disagreements with those arguments; I have tentative agreement with them.
Another way to put it is that you are asking us (by extending what Rawls is asking us) to perform a mental operation that is something like “imagine that you could have been a chicken instead of a human”. **When you ask a question like this, who are you talking to?** It is obviously impossible for me—Said Achmiz, the specific person that I am, right now—to have turned out to be a chicken (or, indeed, anyone other than who I am). So you can’t be talking to me (Said Achmiz).
(bold added to highlight your question, which I’m answering) When I ask a question like that, I’m talking to you (or whoever else I’m talking to at the time).
I’ll check it out! And yeah, that’s where I read the sequences.
I only see two questions in this line of conversation?
do you have a principled disagreement with all of the arguments for why nothing remotely like this is possible even in principle, or… are you not familiar with them?
I’m not familiar with the specific arguments you’re referring to, but I don’t think it’s actually possible for disembodied minds to exist at all in the first place. So no, I don’t have principled disagreements with those arguments; I have tentative agreement with them.
The “this” in “all of the arguments for why nothing remotely like this is possible even in principle” was referring not to the “disembodied spirits” stuff, but rather to:
With enough time and thought I’m sure I could discuss a wide range of experiences, with varying degrees of confidence about how I’d experience them as a chicken. Even though it would be impossible for me, writing this, to ever actually experience those things, it’s still easy to take my understanding of the world and apply it in a thought experiment.
And I mentioned Nagel because of his essay “What Is It Like to Be a Bat?” (which was by no means the only argument for a position like Nagel’s, just the most famous one).
So it sounds like you’re not familiar with this part of the literature. If that’s so, then I think you’ll find it interesting to delve into it.