There is no clear or adequate definition of what “[noise] dominated your calculations” means. Maybe you can provide one, but I’ve never seen anyone provide any such definition, or make much headway in doing so.
Creating such a definition of noise has proven to be quite hard, as it’s extremely rare that someone is willing to ignore absolutely all stakes at lower levels of concern or abstraction, no matter the magnitude.
Even if you try to elevate your family above everything else, it is commonly accepted that it is not moral to sacrifice all of society for just your family, or to threaten large-scale catastrophe.
Similarly, as you elevate the interests of your nation above other things, at a sufficient scale the interests of the rest of the world poke their way into your decision-making in substantial ways again.
Even if you try to do nothing but elevate the interests of animal life, we have still decided that it is not ethical to destroy even fully abiological ecosystems, let alone complicated plant-based ecosystems, for those interests, if the harm is sufficiently large.
Maybe you want to propose we make decisions this way, but humanity absolutely does not generally make decisions this way. When people have to make decisions, they usually decide on some rough thresholds for noticing tradeoffs across domains, and indeed decide and re-evaluate how important something is when a decision affects something in a different domain at a much larger scale than other decisions.
It doesn’t matter how many shrimp it is.
Look, there are numbers that are very very big.[1]
Again, we are talking about so many shrimp that it would be exceedingly unlikely for this number of shrimp, if left under the auspices of gravity, not to form their own planets and solar systems and galaxies in which life thrives and in which other non-shrimp intelligences form. A number so incomprehensibly big. A galaxy within each atom of our universe made out of shrimp. One can argue it’s meaningless to talk about numbers this big, and while I would dispute that, it’s definitely a much more sensible position than trying to take a confident stance to destroy or substantially alter a set of things so large that it vastly eclipses in complexity and volume and mass and energy all that has ever or will ever exist by a trillion-fold.
Indeed, there are numbers so big that the very act of specifying them would encode calculations capable of simulating universes full of healthy and happy humans. The space of numbers is really very large.
One can argue it’s meaningless to talk about numbers this big, and while I would dispute that, it’s definitely a much more sensible position than trying to take a confident stance to destroy or substantially alter a set of things so large that it vastly eclipses in complexity and volume and mass and energy all that has ever or will ever exist by a trillion-fold.
Okay, while I’m hastily backpedaling from the general claims I made, I am interested in your take on the first half of this post. I think there’s a difference between talking about an actual situation, with the full complexities of messy reality taken into account, where a supernatural being physically shows up and makes you really decide between a human and 10^100 shrimp, and a thought experiment where “you” “decide” between a “human” and 10^100 “shrimp”. In the second case, my model is that we’re implicitly operating in an abstracted-out setup where the terms in the quotation marks are, essentially, assumed ontologically basic, and matching our intuitive/baseline expectations about what they mean.
While, within the hypothetical, we can still have some uncertainty over e. g. the degree of the internal experiences of those “shrimp”, I think we have to remove considerations like “the shrimp will be deposited into a physical space obeying the laws of physics where their mass may form planets and galaxies” or “with so many shrimp, it’s near-certain that the random twitches of some subset of them would spontaneously implement Boltzmann civilizations of uncountably many happy beings”.
IMO, doing otherwise is a kind of “dodging the hypothetical”, no different from considering it very unlikely that the supernatural being really has control over 10^100 of something, and starting to argue about this instead.
While, within the hypothetical, we can still have some uncertainty over e. g. the degree of the internal experiences of those “shrimp”, I think we have to remove considerations like “the shrimp will be deposited into a physical space obeying the laws of physics where their mass may form planets and galaxies” or “with so many shrimp, it’s near-certain that the random twitches of some subset of them would spontaneously implement Boltzmann civilizations of uncountably many happy beings”.
IMO, doing otherwise is a kind of “dodging the hypothetical”, no different from considering it very unlikely that the supernatural being really has control over 10^100 of something, and starting to argue about this instead.
I agree there is something to this, but when actually thinking about tradeoffs that do actually have orders of magnitude of variance in them, which is ultimately where this kind of reasoning is most useful (not 100 orders of magnitude, but you know 30-50 are not unheard of), this kind of abstraction would mostly lead you astray, and so I don’t think it’s a good norm for how to take thought experiments like this.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes that come with that drastic change in quantity. Having 10 flop/s of computation is qualitatively different to having 10^10 flop/s. I can easily imagine someone before the onset of modern computing saying “look, how many numbers do you really need to add in everyday life? What is even the plausible purpose of having 10^10 flop/s available? For what purpose could you possibly need to perform 10 billion operations per second? This just seems completely absurd. Clearly the value of a marginal flop goes to zero long before that. That is more operations than all computers[1] in the world have ever done, in all of history, combined. What could possibly be the point of this?”
And of course, such a person would be sorely mistaken. And framing the thought experiment as “well, no, I think if you want to take this thought experiment seriously you should think about how much you would be willing to pay for the 10 billionth operation of the kind that you are currently doing, which is clearly zero. I don’t want you to hypothesize some kind of new art forms or applications or computing infrastructure or human culture, which feel like they are not the point of this exercise, I want you to think about the marginal item in isolation” would be pointless. It would be emptying the exercise and tradeoff of any of its meaning. If we ever face a choice like this, or anything remotely like it, then of course how the world adapts around it, and the applications that get built for it, and the things that weren’t obvious when you first asked the question, all matter.
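(For a rough sense of scale, here is a back-of-the-envelope sketch of my own, using a single dense matrix multiply as an assumed stand-in for a mundane modern workload; the point is just how quickly an “absurd” 10^10 flop/s budget gets consumed by ordinary tasks.)

```python
# Rough illustration (my own sketch): multiplying two n-by-n matrices takes
# about 2 * n^3 floating point operations.
n = 2000                      # a modest matrix size by modern standards
flops_needed = 2 * n**3       # ~1.6e10 operations for a single multiply

budget_per_second = 10**10    # the supposedly absurd 10^10 flop/s budget
print(f"{flops_needed:.2e} flops for one {n}x{n} matrix multiply")
print(f"{flops_needed / budget_per_second:.1f} seconds at 10^10 flop/s")
```

So one mundane linear-algebra task already eats more than a full second of that budget.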
And to be clear, I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than currently alive shrimp, whether it would still be worth it to kill a human. And for that, I think the answer is less obviously “yes” or “no”, though my guess is the 10^100 causally isolated shrimp ultimately still enrich the universe more than a human would, and are worth preserving more, but it’s less clear.
And we could focus on that if we want to, I am not opposed to it, but it’s not clearly what the OP that sparked this whole thread was talking about, and I find it less illuminating than other tradeoffs, and it would still leave me with a strong reaction that the reason why the answer might be “kill the shrimp” is definitely different from the reason why you should not kill a human to allow 7 bees to live for a human lifetime.
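[1] Here referring to the human occupation of “computer”, i.e. someone in charge of performing calculations.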
I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than currently alive shrimp, whether it would still be worth it to kill a human
Yeah, that’s more what I had in mind. Illusion of transparency, I suppose.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes in what it means to have that thing
Certainly, and it’s an important property of reality. But I don’t think this is what extreme hypotheticals such as the one under discussion actually want to talk about (even if you think this is a more important question to focus on)?
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
The hypothetical is interested in shrimp welfare. If we take the above consideration into account, it stops being about “shrimp” at all (see the shrimps-to-rocks move). The abstractions within which the hypothetical is meant to live break.
And yes, if we’re talking about a physical situation involving the number 10^100, the abstractions in question really do break under forces this strong, and we have to navigate the situation with the broken abstractions. But in thought-experiment land, we can artificially stipulate those abstractions inviolable (or replace the crazy-high abstraction-breaking number with a very-high but non-abstraction-breaking number).
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like:
If Jeff Bezos’ net worth reaches $1 trillion, “he could literally end world poverty and give everyone $1 billion and he will still have $91.5 billion left.”
Like, in those discussions people are almost always trying to invoke numbers like “$1 trillion” as “a number so big that the force of the conclusion must be inevitable”, but like most of the time they just fail because the number isn’t big enough.
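(For concreteness, a quick back-of-the-envelope check of the quoted claim, assuming a world population of roughly 8 billion; the exact figure doesn’t change the conclusion.)

```python
# The quoted claim: a $1 trillion net worth could give everyone $1 billion.
world_population = 8_000_000_000              # rough assumption
net_worth = 1_000_000_000_000                 # $1 trillion

per_person = net_worth / world_population     # what $1 trillion actually spreads to
cost_of_a_billion_each = world_population * 1_000_000_000

print(f"${per_person:,.0f} per person")              # about $125 each
print(f"${cost_of_a_billion_each:,.0f} required")    # about $8 quintillion
```

The available amount falls short by roughly seven orders of magnitude, which is the sense in which the number just isn’t big enough.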
If someone was like “man, are you really that confident that a shrimp does not have morally relevant experience that you wouldn’t trade a human for a million shrimp?”, my response is “nope, sorry, 1 million isn’t big enough, that’s just really not that big of a number”. But if you give me a number a trillion trillion trillion trillion trillion trillion trillion trillion times bigger, IDK, yeah, that is a much bigger number.
And correspondingly, for every thought experiment of this kind, I do think there is often a number that will just rip through your assumptions and your tradeoffs. There are just really very very very big numbers.
Like, sure, we all agree our abstractions break here, and I am not confident you can’t find any hardening of the abstractions that makes the tradeoff come out in the direction of the size of the number really absolutely not mattering at all, but I think that would be a violation of the whole point of the exercise. Like, clearly we can agree that we assign a non-zero value to a marginal shrimp. We value that marginal shrimp for a lot of different reasons, but like, you probably value it for reasons that do include things like the richness of its internal experience, and the degree to which it differs from other shrimp, and the degree to which it contributes to an ecosystem, and the degree to which it’s an interesting object of trade, and all kinds of reasons. Now, if we want to extrapolate that value to 10^100, those things are still there, we can’t just start ignoring them.
Like, I would feel more sympathetic to this simplification if the author of the post were a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like
Isn’t it the opposite? It’s a defence against providing too-low numbers; it’s specifically to ensure that even infinitesimally small preferences are elicited with certainty.
Bundling up all “this seems like a lot” numbers into the same mental bucket, and then failing to recognize when a real number is not actually as high as in your hypothetical, is certainly an error one could make here. But I don’t see an exact correspondence...
In the billionaires case, a thought-experimenter may invoke the hypothetical of “if a wealthy person had enough money to lift everyone out of poverty while still remaining rich, wouldn’t their not doing so be outrageous?”, while inviting the audience to fill in the definitions of “enough money” and “poverty”. Practical situations might then just fail to match that hypothetical, and innumerate people might fail to recognize that, yes. But this doesn’t mean that that hypothetical is fundamentally useless to reason about, or that it can’t be used to study some specific intuitions/disagreements. (“But there are no rich people with so much money!” kind of maps to “but I did have breakfast!”.)
And in the shrimps case, hypotheticals involving a “very-high but not abstraction-breaking” number of shrimps are a useful tool for discussion/rhetoric. It allows us to establish agreement/disagreement on “shrimp experiences have inherent value at all”, a relatively simple question that could serve as a foundation for discussing other, more complicated and contextual ones. (Such as “how much should I value shrimp experiences?” or “but do enough shrimps actually exist to add up to more than a human?” or “but is Intervention X to which I’m asked to donate $5 going to actually prevent five dollars’ worth of shrimp suffering?”.)
Like, I think having a policy of always allowing abstraction breaks would just impoverish the set of thought experiments we would be able to consider and use as tools. Tons of different dilemmas would collapse to Pascal’s mugging or whatever.
Like, I would feel more sympathetic to this simplification if the author of the post were a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
Hmm… I think this paragraph at the beginning is what primed me to parse it this way:
Merriam-Webster defines torture as “the infliction of intense pain (as from burning, crushing, or wounding) to punish, coerce, or afford sadistic pleasure.” So I remind the reader that it is part of the second thought experiment that the shrimp are sentient.
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I suppose it’s possible that if I had the full context of the author’s writing in mind, your interpretation would have been obviously correct[2]. But the essay itself reads the opposite way to me.
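[1] A pretty strong one, I think, since “are shrimp qualia of nonzero moral relevance?” is often the very point of many discussions.
[2] Indeed, failing to properly familiarize myself with the discourse and the relevant frames before throwing in hot takes was my main blunder here.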
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I agree I probably implied a bit too much contextualization. Like, I agree the post has a utilitarian bent, but man, I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation; I find attempts at trying to create a “pure qualia shrimp” about as confused and meaningless as trying to argue that 7 bees are more important than a human. “Qualia” isn’t a thing that exists. The only thing that exists are your values in all of their complexity and godshatteredness. You can’t make a “pure qualia shrimp”, it doesn’t make any philosophical sense, pure qualia isn’t real.
And I agree that maybe the post was imagining some pure qualia juice, and I don’t know, maybe in that case it makes sense to dismiss it by doing a reductio ad absurdum on qualia juice, but I don’t currently buy it. I think that would both fail to engage with the strongest parts of the author’s position, and also be kind of a bad step in the discourse (like, the previous step was understanding why it doesn’t make sense for 7 bees to be more important than a human, for a lot of different reasons and very robustly, and within that discourse it’s actually quite important to understand why 10^100 shrimp might actually be more important than a human, under at least a lot of reasonable sets of assumptions).
I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation
Same, honestly. To me, many of these thought experiments seem decoupled from anything practically relevant. But it still seems to me that people often do argue from those abstracted-out frames I’d outlined, and these arguments are probably sometimes useful for establishing at least some agreement on ethics. (I’m not sure what a full-complexity godshatter-on-godshatter argument would even look like (a fistfight, maybe?), and am very skeptical it’d yield any useful results.)
Anyway, it sounds like we mostly figured out what the initial drastic disconnect between our views here was caused by?
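Yeah, I think so, though not sure. But I feel good stopping here.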
Even if you try to elevate your family above everything else, it is commonly accepted that it is not moral to sacrifice all of society for just your family, or to threaten large-scale catastrophe.
This just means that “elevate your family above everything else” is not an approved-of moral principle, not that it somehow doesn’t work on its own terms. In any case this is not a problem with multi-tier morality, it’s just a disagreement on what the tiers should be.
Similarly, as you elevate the interests of your nation above other things, at a sufficient scale the interests of the rest of the world poke their way into your decision-making in substantial ways again.
This, on the other hand, is a matter of instrumental values, not terminal ones. There is once again no problem here with multi-tier morality.
Even if you try to do nothing but elevate the interests of animal life, we have still decided that it is not ethical to destroy even fully abiological ecosystems, let alone complicated plant-based ecosystems, for those interests, if the harm is sufficiently large.
Same reply as to the first point. (Also, who has ever advocated so weirdly drawn a moral principle as “do nothing but elevate the interests of animal life”…?)
It doesn’t matter how many shrimp it is.
That is false. The numbers are very big. There are numbers so big that the very act of specifying them would encode calculations capable of simulating universes full of healthy and happy humans. It absolutely matters how big this kind of number is.
It doesn’t matter how big the numbers are, because the moral value of shrimp does not aggregate like that. If it were 3^^^3 shrimp, it still wouldn’t matter.
Again, we are talking about so many shrimp that it would be exceedingly unlikely for this number of shrimp, if left under the auspices of gravity, not to form their own planets and solar systems and galaxies in which life thrives and in which other non-shrimp intelligences form.
Now you’re just smuggling in additional hypothesized entities and concerns. Are we talking about shrimp, or about something else? This is basically a red herring.
That aside—no, the numbers really don’t matter, because that’s just not how moral value of shrimp works, in any remotely sensible moral system. A trillion shrimp do not have a million times the moral value of a million shrimp. If your morality says that they do, then your morality is broken.
A trillion shrimp do not have a million times the moral value of a million shrimp. If your morality says that they do, then your morality is broken.
Nobody was saying this! The author of the post in question also does not believe this!
I am not a hedonic utilitarian. I do not think that a trillion shrimp have a million times the moral value of a million shrimp. That is a much much stronger statement than whether there exists any number of shrimp that might be worth more than a human. All you’ve done here is to set up a total strawman that nobody was arguing for and knocked it down.
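Ok. Do you think that a trillion shrimp have: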
… 1,000 times the moral value of a million shrimp?
… 10 times the moral value of a million shrimp?
… 1.1 times the moral value of a million shrimp?
… some other multiplicative factor, larger than 1, times the moral value of a million shrimp?
If the answer is “no” to all of these, then that seems like it would mean that you already agree with me, and your previous comments here wouldn’t make any sense. So it seems like the answer has to be “yes” to something in that list.
But then… my response stands, except with the relevant number changed.
On the other hand, you also say:
I am not a hedonic utilitarian.
I… don’t understand how you could be using this term such that this would be a meaningful or relevant thing to say in response to my comment. Ok, you’re not a hedonic utilitarian, and thus… what?
Is the point that your claim (that saving 10^100 shrimp instead of one human isn’t insane) was actually not a moral claim at all, but some other kind of claim (prudential, for instance)? No, that doesn’t seem to work either, because you wrote:
No, being extremely overwhelmingly confident about morality such that even if you are given a choice to drastically alter 99.999999999999999999999% of the matter in the universe, you call the side of not destroying it “insane” for not wanting to give up a single human life, a thing we do routinely for much weaker considerations, is insane.
So clearly this is about morality…
… yeah, I can’t make any sense of what you’re saying here. What am I missing?
… 1,000 times the moral value of a million shrimp?
… 10 times the moral value of a million shrimp?
… 1.1 times the moral value of a million shrimp?
… some other multiplicative factor, larger than 1, times the moral value of a million shrimp?
I don’t know, seems like a very hard question, and I think it will be quite sensitive to a bunch of details of the exact comparison. Like, how much cognitive diversity is there among the shrimp? Are the shrimps forming families and complicated social structures, or are they all in an isolated grid? Are they providing value to an extended ecosystem of other life? How rich is the life of these specific shrimp?
I would be surprised if the answer basically ever turned out to be less than 1.1, and surprised if it ever turned out to be more than 10,000.
But then… my response stands, except with the relevant number changed.
I don’t think your response said anything except to claim that a linear relationship between shrimp and value seems to quickly lead to absurd conclusions (or at least that is what I inferred from your claim that a trillion shrimp are not a million times more valuable than a million shrimp). I agree with that as a valid reductio ad absurdum, but given that I see no need for linearity here (simply any ratio, which could even differ with the scale and details of the scenario), I don’t see how your response stands.
… yeah, I can’t make any sense of what you’re saying here. What am I missing?
I have little to go off of besides repeating myself, as you have given me little to work with besides repeated insistence that what I believe is wrong or absurd. My guess is my meaning is more clear (though probably still far from perfectly clear) to other readers.
I don’t know, seems like a very hard question, and I think it will be quite sensitive to a bunch of details of the exact comparison. Like, how much cognitive diversity is there among the shrimp? Are the shrimps forming families and complicated social structures, or are they all in an isolated grid? Are they providing value to an extended ecosystem of other life? How rich is the life of these specific shrimp?
I mean… we know the answers to these questions, right? Like… shrimp are not some sort of… un-studied exotic form of life. (In any case it’s a moot point, see below.)
I would be surprised if the answer basically ever turned out to be less than 1.1, and surprised if it ever turned out to be more than 10,000.
Right, so, “some … multiplicative factor, larger than 1”. That’s what I assumed. Whether that factor is 1 million, or 1.1, really doesn’t make any difference to what I wrote earlier.
I don’t think your response said anything except to claim that a linear relationship between shrimp and value seems to quickly lead to absurd conclusions (or at least that is what I inferred from your claim that a trillion shrimp are not a million times more valuable than a million shrimp). I agree with that as a valid reductio ad absurdum, but given that I see no need for linearity here (simply any ratio, which could even differ with the scale and details of the scenario), I don’t see how your response stands.
No, my point is that any factor at all that is larger than 1, and remains larger than 1 as numbers increase, leads to absurd conclusions. (Like, for example, the conclusion that there is some number of shrimp such that that many shrimp are worth more than a human life.)
Given this correction, do you still think that I’m strawmanning or misunderstanding your views…? (I repeat that linearity is not the target of my objection!)
No, my point is that any factor at all that is larger than 1, and remains larger than 1 as numbers increase
I mean, clearly you agree that two shrimp are more important than one shrimp, and that this continues to be true (at least for a while) as the numbers increase. So no, I don’t understand what you are saying, as nothing you have said appears sensitive to any numbers being different, and clearly for small numbers you agree that these comparisons must hold.
I agree there is a number big enough where eventually you approach 1; nothing I have said contradicts that. As in, my guess is the series of the value of shrimp as n goes to infinity does not diverge but eventually converges on some finite number (though especially with considerations like Boltzmann brains and quantum uncertainty and matter/energy density, this does seem confusing to think about).
It seems quite likely to me that this point of convergence is above the value of a human life, as numbers can really get very big, there are a lot of humans, and shrimp are all things considered pretty cool and interesting and a lot of shrimp seem like they would give rise to a lot of stuff.
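(For concreteness, one toy value function with exactly this shape, strictly increasing in the number of shrimp yet convergent to a finite cap; the functional form and the parameters V_MAX and K are purely illustrative assumptions of mine, not anything either commenter committed to.)

```python
import math

V_MAX = 1.0   # hypothetical cap that the total value of shrimp converges to
K = 1e9       # hypothetical scale at which returns start to flatten

def toy_shrimp_value(n: float) -> float:
    """Monotonically increasing in n, but bounded above by V_MAX."""
    return V_MAX * (1.0 - math.exp(-n / K))

for n in [1, 2, 1e6, 1e12, 1e100]:
    print(f"{n:g} shrimp -> {toy_shrimp_value(n):.3e}")
```

On a shape like this, whether any number of shrimp ever outweighs a human depends only on where the cap sits relative to the value of a human life, which is the disagreement the rest of the exchange turns on.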
I mean, clearly you agree that two shrimp are more important than one shrimp
Hm… no, I don’t think so. Enough shrimp to ensure that there keep being shrimp—that’s worth more than one shrimp. Less shrimp than that, though—nah.
I agree there is a number big enough where eventually you approach 1; nothing I have said contradicts that. As in, my guess is the series of the value of shrimp as n goes to infinity does not diverge but eventually converges on some finite number, though it does feel kind of confusing to think about.
Sure, this is all fine (and nothing that I have said contradicts you believing this; it seems like you took my objection to be much narrower than it actually was), but you’re saying that this number is much larger than the value of a human life. That’s the thing that I’m objecting to.
I’ll mostly bow out at this point, but one quick clarification:
but you’re saying that this number is much larger than the value of a human life
I didn’t say “much larger”! Like, IDK, my guess is there is some number of shrimp for which it’s worth sacrificing a thousand humans, which is larger, but not necessarily “much”.
My guess is there is no number, at least in the least convenient world where we are not talking about shrimp galaxies forming alternative life forms, for which it’s worth sacrificing 10 million humans, at least at current population levels and on the current human trajectory.
10 million is just a lot, and humanity has a lot of shit to deal with, and while I think it would be an atrocity to destroy this shrimp-gigaverse, it would also be an atrocity to kill 10 million people, especially intentionally.