Edit: Nevermind, evidently I’ve not thought this through properly. I’m retracting the below.
The naïve formulations of utilitarianism assume that all possible experiences can be mapped to scalar utilities lying on the same, continuous spectrum, and that experiences’ utility is additive. I think that’s an error.
This is how we get the frankly insane conclusions like “you should save 10^100 shrimps instead of one human” or everyone’s perennial favorite, “if you’re choosing between one person getting tortured for 50 years or some number N of people getting a dust speck in their eye, there must be an N big enough that the torture is better”. I disagree with those. I would sacrifice an arbitrarily large amount of shrimps to save one human, and there’s no N big enough for me to pick torture. I don’t care if that disagrees with what the math says: if the math says something else, it’s the wrong math and we should find a better one.
Here’s a sketch of what I think such better math might look like (a toy implementation of the comparison rule is sketched below):
There’s a totally ordered, discrete set of “importance tiers” of experiences.
Within tiers, the utilities are additive: two people getting dust specks is twice as bad as one person getting dust-speck’d, two people being tortured is twice as bad as one person being tortured, eating a chocolate bar twice per week is twice as good as eating a bar once per week, etc.
Across tiers, the tier ordering dominates: if we’re comparing some experience belonging to a higher tier to any combination of experiences from lower tiers, the only relevant consideration is the sign of the higher-tier experience. No amount of people getting dust-speck’d, and no amount of dust-speck events sparsely distributed throughout one person’s life, can ever add up to anything as important as a torture-level experience.[1]
Intuitively, tiers correspond to the size of effect a given experience has on a person’s life:
A dust speck is a minor inconvenience that is forgotten after a second. If we zoom out and consider the person’s life at a higher level, say on the scale of a day, this experience rounds off exactly to zero, rather than to an infinitesimally small but nonzero value. (Again, on the assumption of a median dust-speck event, no emergent or butterfly effects.)
Getting yelled at by someone might ruin the entirety of someone’s day, but is unlikely to meaningfully change the course of their life. Experiences on this tier are more important than any amount of dust-speck experiences, but any combination of them rounds down to zero from a life’s-course perspective.
Getting tortured is likely to significantly traumatize someone, to have a lasting negative impact on their life. Experiences at this tier ought to dominate getting-yelled-at as much as getting-yelled-at dominates dust specks.
Physically, those “importance tiers” probably fall out of the hierarchy of natural abstractions. Like everything else, a person’s life has different levels of organization. Any detail in how the high-level life-history goes is incomparably more important than any experience which is only locally relevant (which fails to send long-distance ripples throughout the person’s life). Butterfly effects are then the abstraction leaks (low-level events that perturb high-level dynamics), etc.
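A minimal sketch of the comparison rule above (within-tier additivity, across-tier lexicographic dominance), assuming experiences are represented as (tier, utility) pairs; the tier assignments and numbers below are purely illustrative assumptions, not anything specified in the text:

```python
from collections import defaultdict

def compare_outcomes(bundle_a, bundle_b):
    """Lexicographically compare two bundles of experiences.

    Each bundle is a list of (tier, utility) pairs: tier is an integer
    (higher = more important), utility is a signed number. Utilities add
    up within a tier; the highest tier on which the bundles differ decides
    the comparison, regardless of anything happening on lower tiers.
    Returns "A", "B", or "tie".
    """
    totals_a, totals_b = defaultdict(float), defaultdict(float)
    for tier, utility in bundle_a:
        totals_a[tier] += utility
    for tier, utility in bundle_b:
        totals_b[tier] += utility

    for tier in sorted(set(totals_a) | set(totals_b), reverse=True):
        diff = totals_a[tier] - totals_b[tier]
        if diff > 0:
            return "A"
        if diff < 0:
            return "B"
    return "tie"

# Illustrative tiers: 0 = dust speck, 1 = ruined day, 2 = life-altering.
torture = [(2, -1.0)]                  # one torture-level experience
dust_specks = [(0, -1.0)] * 10**6      # a million dust-speck experiences
# The speck bundle wins at the highest tier on which the two differ (tier 2,
# where it contributes no harm), so no number of specks outweighs the torture.
print(compare_outcomes(dust_specks, torture))  # -> "A"
```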
I didn’t spend much time thinking about this, so there may be some glaring holes here, but this already fits my intuitions much better.
I think we can expand that framework to cover “tiers of sentience”:
If shrimps have qualia, it might be that any qualia they’re capable of experiencing belong to lower-importance tiers, compared to the highest-tier human qualia.
Simultaneously, it might be the case that the highest-importance shrimp qualia are on the level of the lower-importance human qualia.
Thus, it might be reasonable to sacrifice the experience of eating a chocolate bar to save 10^100 shrimps, even if you’d never sacrifice a person’s life (or even make someone cry) to save any amount of shrimps.
This makes some intuitive sense, I think. The model above assumes that “local” experiences, which have no impact on the overarching pattern of a person’s life, are arbitrarily less important than that overarching pattern. What if we’re dealing with beings whose internal lives have no such overarching patterns, then? A shrimp’s interiority is certainly less complex than that of a human, so it seems plausible that its life-experience lacks comparably rich levels of organization (something like “the ability to experience what is currently happening as part of the tapestry spanning its entire life, rather than as an isolated experience”). So all of its qualia would be comparable only with the “local” experiences of a human, for some tier of locality: we would have direct equivalence between them.
One potential issue here is that this implies the existence of utility monsters: some divine entities such that they can have experiences incomparably more important than any experience a human can have. I guess it’s possible that if I understood qualia better, I would agree with that, but this seems about as anti-intuitive as “shrimps matter as much as humans”. My intuition is that sapient entities top the hierarchy of moral importance, that there’s nothing meaningfully “above” them. So that’s an issue.
One potential way to deal with this is to suggest that what distinguishes sapient/”generally intelligent” entities is not that they’re the only entities whose experiences matter, but that they have the ability to (learn to) have experiences of arbitrarily high tiers. And indeed: the whole shtick of “general intelligence” is that it should allow you to learn and reason about arbitrarily complicated systems of abstraction/multi-level organization. If the importance tiers of experiences really have something to do with the richness of the organization of the entity’s inner life, this resolves things neatly. Now:
Non-sapient entities may have experiences of nonzero importance.
No combination of non-sapient experiences can compare to the importance of a sapient entity’s life.
“Is sapient” tops the hierarchy of moral relevance: there’s no type of entity that is fundamentally “above”.
Two caveats here are butterfly effects and emergent importance:
Getting a dust speck at the wrong moment might kill you (if you’re operating dangerous machinery) or change the trajectory of your life (if this minor inconvenience is the last straw that triggers a career-ruining breakdown). We have to assume such possibilities away: the experiences exist “in a vacuum”. Doing otherwise would violate the experimental setup, dragging in various practical considerations, instead of making it purely about ethics.
So we assume that each dust-speck event always has the “median” amount of impact on a person’s life, even if you scale the amount of dust-speck events arbitrarily.
Getting 1000 dust specks one after another adds up to something more than 1000 single-dust-speck experiences; it’s worse than getting a dust speck once per day for 1000 days. More intuitively, experiencing a 10/10 pain for one millisecond is not comparable to experiencing 10/10 pain for 10 minutes. There are emergent effects at play, and like with butterflies, we must assume them away for experimental purity.
So if we’re talking about M experiences from within the same importance tier, they’re assumed to be distributed such that they don’t add up to a higher-tier experience.
Note that those are very artificial conditions. In real life, both of those are very much in play. Any lower-tier experience has a chance of resulting in a higher-tier experience, and every higher-tier experience emerges from (appropriately distributed) lower-tier experiences. In our artificial setup, we’re assuming certain knowledge that no butterfly effects would occur, and that a lower-tier event contributes to no higher-tier pattern.
Relevance: There’s reasoning that goes, “if you ever drive to the store to get a chocolate bar, you’re risking crashing into and killing someone, therefore you don’t value people’s lives infinitely more than eating chocolate”. I reject it on the above grounds. Systematically avoiding all situations where you’re risking someone’s life in exchange for a low-importance experience would assemble into a high-importance life-ruining experience for you (starving to death in your apartment, I guess?). Given that, we’re now comparing same-tier experiences, and here I’m willing to be additive, calculating that killing a person with very low probability is better than killing yourself (by a thousand cuts) with certainty.
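A worked version of that same-tier comparison, with purely illustrative numbers (the per-trip risk figure is an assumption, not something from the text): once both outcomes are treated as members of the same life-or-death tier, ordinary expected-value reasoning applies within the tier.

```latex
% Both outcomes are assumed to sit in the same tier, so within-tier additivity
% and expectations apply; p is a made-up per-trip fatality risk.
\mathbb{E}[\text{disutility of driving}] \approx p \cdot D_{\text{kill a pedestrian}},
\qquad
\mathbb{E}[\text{disutility of never driving}] \approx 1 \cdot D_{\text{ruin your own life}}.
% With p on the order of 10^{-7} and the two D's of comparable (same-tier)
% magnitude, p \cdot D_{\text{kill}} \ll D_{\text{ruin}}, so driving comes out ahead.
```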
Besides uncertainty, there’s the problem of needing to pick cutoffs between tiers in a ~continuous space of ‘how much effect does this have on a person’s life?’, with things slightly on one side or the other of a cutoff being treated very differently.
Intuitively, tiers correspond to the size of effect a given experience has on a person’s life:
I agree with the intuition that this is important, but I think that points toward just rejecting utilitarianism (as in utility-as-a-function-purely-of-local-experiences, not consequentialism).
It’s worth noting that everything funges: some large number of experiences of eating a chocolate bar can be exchanged for avoiding extreme human suffering or death. So, if you lexicographically put higher weight on extreme human suffering or death, then you’re willing to make extreme tradeoffs (e.g. 10^30 chocolate bar experiences) in terms of mundane utility for saving a single life. I think this easily leads to extremely unintuitive conclusions, e.g. you shouldn’t ever be willing to drive to a nice place. See also Trading off Lives.
I find your response to this sort of argument under “Relevance: There’s reasoning that goes” in the footnote very uncompelling as it doesn’t apply to marginal impacts.
Relevance: There’s reasoning that goes, “if you ever drive to the store to get a chocolate bar, you’re risking crashing into and killing someone, therefore you don’t value people’s lives infinitely more than eating chocolate”. I reject it on the above grounds. Systematically avoiding all situations where you’re risking someone’s life in exchange for a low-importance experience would assemble into a high-importance life-ruining experience for you (starving to death in your apartment, I guess?). Given that, we’re now comparing same-tier experiences, and here I’m willing to be additive, calculating that killing a person with very low probability is better than killing yourself (by a thousand cuts) with certainty.
Ok, but if you don’t drive to the store one day to get your chocolate, then that is not a major pain for you, yes? Why not just decide that next time you want chocolate at the store, you’re not going to go out and get it because you may run over a pedestrian? Your decision there doesn’t need to impact your other decisions.
Then you ought to keep on making that choice until you are right on the edge of those choices adding up to a first-tier experience, but certainly below.
This logic generalizes. You will always be pushing the lower tiers of experience as low as they can go before they enter the upper-tiers of experience. I think the fact that your paragraph above is clearly motivated reasoning here (instead of “how can I actually get the most bang for my buck within this moral theory” style reasoning) shows that you agree with me (and many others) that this is flawed.
Systematically avoiding all situations where you’re risking someone’s life in exchange for a low-importance experience would assemble into a high-importance life-ruining experience for you (starving to death in your apartment, I guess?).
We can easily ban speeds above 15 km/h for any vehicles except ambulances. Nobody starves to death in this scenario; it’s just very inconvenient. We value the convenience lost in that scenario more than the lives lost in our actual reality, which is why we don’t ban high-speed vehicles.
Ordinal preferences are bad and insane and they are to be avoided.
What’s really wrong with utilitarianism is that you can’t, actually, sum utilities: it’s a type error. Utilities are only defined up to a positive affine transformation, so what would their sum even mean?
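To spell out the standard fact being gestured at here (this is the textbook vNM invariance property, stated as a reminder rather than anything new): each person’s utility function can be rescaled without changing any of their preferences, but the interpersonal sum is not stable under such rescalings.

```latex
% For each agent i, any positive affine transform represents the same preferences:
U_i'(x) = a_i\,U_i(x) + b_i, \qquad a_i > 0.
% But the interpersonal sum
\sum_i U_i'(x) \;=\; \sum_i a_i\,U_i(x) \;+\; \sum_i b_i
% is not, in general, an affine transform of \sum_i U_i(x): different choices of
% the scale factors a_i can reorder outcomes under the "total", so the sum has
% no preference-theoretic meaning without a further, substantive choice of
% interpersonal weights.
```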
The problem, I think, is that humans naturally conflate two types of altruism. The first type is caring about other entities’ mental states. The second type is “game-theoretic” or “alignment-theoretic” altruism: a generalized notion of what it means to care about someone else’s values. Roughly, I think a good version of the second type of altruism requires you to bargain fairly in the interests of the entity you are being altruistic towards.
Let’s take the “World Z” thought experiment. The problem, from the second-type-altruism perspective, is that the total utilitarian gets very large utility from this world, while all inhabitants of this world, by premise, get very small utility per person, which is an unfair division of gains.
One may object: why not create entities who think that a very small share of gains is fair? My answer is that if an entity can be satisfied with an infinitesimal share of gains, it can also be satisfied with an infinitesimal share of anthropic measure, i.e., non-existence, and it’s more altruistic to look for more demanding entities to fill the universe with.
My general problem with animal welfare, from the bargaining perspective, is that most animals probably don’t have sufficient agency to have any sort of representative in the bargaining. We can imagine a CEV of shrimp which is negative-utilitarian and wants to kill all shrimp, or a positive-utilitarian one which thinks that even a very painful existence is worth it, or a CEV that prefers shrimp swimming in heroin, or something human-like, or something totally alien; the sum of these guesses probably comes out to “do not torture and otherwise do as you please”.
This is how we get the frankly insane conclusions like you should save 10^100 shrimps instead of one human
Huh, I expected better from you.
No, it is absolutely not insane to save 10^100 shrimp instead of one human! I think the case for insanity for the opposite is much stronger! Please, actually think about how big 10^100 is. We are talking about more shrimp than atoms in the universe. Trillions upon trillions of shrimp more than atoms in the universe.
This is a completely different kind of statement than “you should trade off seven bees against a human”.
No, being extremely overwhelmingly confident about morality such that even if you are given a choice to drastically alter 99.999999999999999999999% of the matter in the universe, you call the side of not destroying it “insane” for not wanting to give up a single human life, a thing we do routinely for much weaker considerations, is insane.
The whole “tier” thing obviously fails. You always end up dominated by spurious effects on the highest tier. In a universe with any appreciable uncertainty you basically just ignore any lower tiers, because you can always tell some causal story of how your actions might infinitesimally affect something on the highest tier, and so you completely ignore the lower ones. You might as well just throw away all morality except the highest tier; it will never change any of your actions.
I’m normally in favor of high decoupling, but this thought experiment seems to take it well beyond the point of absurdity. If I somehow found myself in control of the fate of 10^100 shrimp, the first thing I’d want to do is figure out where I am and what’s going on, since I’m clearly no longer in the universe I’m familiar with.
Yeah, I mean, that also isn’t a crazy response. I think being like “what would it even mean to have 10^30 times more shrimp than atoms? Really seems like my whole ontology about the world must be really confused” also seems fine. My objection is mostly to “it’s obvious you kill the shrimp to save the human”.
what would it even mean to have 10^30 times more shrimp than atoms?
Oh, easy, it just implies you’re engaging in acausal trade with a godlike entity residing in some universe dramatically bigger than this one. This interpretation introduces no additional questions or complications whatsoever.
No, being extremely overwhelmingly confident about morality such that even if you are given a choice to drastically alter 99.999999999999999999999% of the matter in the universe, you call the side of not destroying it “insane” for not wanting to give up a single human life, a thing we do routinely for much weaker considerations, is insane.
Hm. Okay, so my reasoning there went as follows:
Substitute shrimp for rocks. 10^100 rocks would also be an amount of matter bigger than exists in the observable universe, and we presumably should assign a nonzero probability to rocks being sapient. Should we then save 10^100 rocks instead of one human?
Perhaps. But I think this transforms the problem into Pascal’s mugging, and has nothing to do with shrimp or ethics anymore. If we’re allowed to drag in outside considerations like this, we should also start questioning whether these 10^100 rocks/shrimp actually exist, and all other usual arguments against Pascal’s mugging.
To properly engage with thought experiments within some domain, like ethics, we should take the assumptions behind this domain as a given. This implicitly means constraining our hypothesis space to models of reality within which this domain is a meaningful thing to reason about.
In this case, this would involve being able to reason about 10^100 rocks as if they really were just “rocks”, without dragging in the uncertainty about “but what if my very conception of what a ‘rock’ is is metaphysically confused?”.
Similarly, surely we should be able to have thought experiments in which “shrimp” really are just “shrimp”, ontologically basic entities that are not made up of matter which can spontaneously assemble into Boltzmann brains or whatever.
“Shrimp” being a type of system that could implement qualia as valuable as that of humans seems overwhelmingly unlikely to me, not as unlikely as “rocks have human-level qualia”, but in the same reference class. Therefore, in the abstract thought-experiment setup in which I have no uncertainty regarding the ontological nature of shrimp, it’s reasonable to argue that no amount of them compares to a human life.
I’m not sure where you’d get off this train, but I assume the last bullet point would be it? I.e., that you would argue that holding the possibility of shrimps having human-level qualia is salient in a way it’s not for rocks?
Yeah, that seems valid. I might’ve shot from the hip on that one.
The whole “tier” thing obviously fails. You always end up dominated by spurious effects on the highest tier
I have a story for how that would make sense, similarly involving juggling inside-model and outside-model reasoning, but, hm, I’m somehow getting the impression my thinking here is undercooked/poorly presented. I’ll revisit that one at a later time.
Edit: Incidentally, any chance the UI for retracting a comment could be modified? I have two suggestions here:
I’d like to be able to list a retraction reason, ideally at the top of the comment.
The crossing-out thing makes it difficult to read the comment afterwards, and some people might want to be able to do that. Perhaps it’s better to automatically put the contents into a collapsible instead, or something along those lines?
Edit: Incidentally, any chance the UI for retracting a comment could be modified? I have two suggestions here:
You should be able to strike out the text manually and get the same-ish effect, or leave a retraction notice. The text being hard to read is intentional so that it really cannot be the case that someone screenshots it or skims it without noticing that it is retracted.
No, it is absolutely not insane to save 10^100 shrimp instead of one human! It would be insane to do the opposite. Please, actually think about how big 10^100 is. We are talking about more shrimp than atoms in the universe. Trillions upon trillions of shrimp more than atoms in the universe.
I think it pretty clearly is insane to save 10^100 shrimp instead of one human! It doesn’t matter how many shrimp it is. The moral value of shrimp does not aggregate like that.
The grandparent comment is obviously correct in its description of the problem. (Whether the proposed solution works is another question entirely.)
The whole “tier” thing obviously fails. You always end up dominated by spurious effects on the highest tier.
That’s just not true.
One obvious approach is: once you get to the point where noise (i.e., non-systematic error) dominates your calculations for a particular tier, you ignore that tier and consider the next lower tier. (This is also approximately equivalent to a sort of reasoning which we use all the time, and it works pretty straightforwardly, without giving rise to the sorts of pathologies you allude to.)
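One toy way to operationalize that procedure, under assumptions that are mine rather than the comment’s: each option carries, per tier, an estimated effect plus a noise (standard-error) figure, and “noise dominates” is arbitrarily taken to mean that every option’s estimate at that tier is within its noise.

```python
def decide_by_tiers(options, tiers_high_to_low):
    """Pick an option by walking down the tiers, highest first.

    `options` maps an option name to {tier: (estimated_effect, noise)}.
    If every option's estimate at a tier is swamped by its noise, skip that
    tier and fall through to the next one; otherwise pick the option with
    the best estimated effect there. Returns None if no tier gives a signal.
    """
    for tier in tiers_high_to_low:
        estimates = {
            name: effects.get(tier, (0.0, 0.0)) for name, effects in options.items()
        }
        # "Noise dominates" is (arbitrarily) taken as |estimate| <= noise for all options.
        if all(abs(est) <= noise for est, noise in estimates.values()):
            continue  # no signal at this tier; consider the next lower tier
        return max(estimates, key=lambda name: estimates[name][0])
    return None  # every tier was noise-dominated

# Illustrative numbers: at the "life-course" tier both options are pure noise,
# so the decision falls through to the "ruined-day" tier, where staying home loses.
options = {
    "drive":     {"life-course": (0.0, 1.0), "ruined-day": (0.0, 0.1)},
    "stay home": {"life-course": (0.0, 1.0), "ruined-day": (-1.0, 0.1)},
}
print(decide_by_tiers(options, ["life-course", "ruined-day"]))  # -> "drive"
```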
There is no clear or adequate definition of what “[noise] dominates your calculations” means. Maybe you can provide one, but I’ve never seen anyone provide any such definition, or make much headway in doing so.
Creating such a definition of noise has proven to be quite hard, as it’s extremely rare that someone is willing to ignore absolutely all stakes at lower levels of concern or abstraction, no matter the magnitude.
Even if you try to elevate your family above everything else, it is commonly accepted that it is not moral to sacrifice all of society for just your family, or to threaten large-scale catastrophe.
Similarly, as you elevate the interests of your nation above other things, at a sufficient scale the interests of the rest of the world poke their way back into your decision-making in substantial ways.
Even if you try to do nothing but elevate the interests of animal life, we have still decided that it is not ethical to destroy even fully abiological ecosystems, and definitely not complicated plant-based ones, for those interests, if the harm is sufficiently large.
Maybe you want to propose that we make decisions this way, but humanity absolutely does not generally make decisions this way. When people have to make decisions, they usually settle on some rough thresholds for noticing tradeoffs across domains, and they re-evaluate how important something is when a decision affects a different domain at a much larger scale than other decisions do.
It doesn’t matter how many shrimp it is.
Look, there are numbers that are very very big.[1]
Again, we are talking about so many shrimp that it would be exceedingly unlikely for this number of shrimp, if left under the auspices of gravity, not to form their own planets and solar systems and galaxies in which life thrives and in which other non-shrimp intelligences form. A number so incomprehensibly big. A galaxy within each atom of our universe made out of shrimp. One can argue it’s meaningless to talk about numbers this big, and while I would dispute that, it’s definitely a much more sensible position than trying to take a confident stance to destroy or substantially alter a set of things so large that it vastly eclipses in complexity and volume and mass and energy all that has ever or will ever exist by a trillion-fold.
Indeed, there are numbers so big that the very act of specifying them would encode calculations capable of simulating universes full of healthy and happy humans. The space of numbers is really very large.
One can argue it’s meaningless to talk about numbers this big, and while I would dispute that, it’s definitely a much more sensible position than trying to take a confident stance to destroy or substantially alter a set of things so large that it vastly eclipses in complexity and volume and mass and energy all that has ever or will ever exist by a trillion-fold.
Okay, while I’m hastily backpedaling from the general claims I made, I am interested in your take on the first half of this post. I think there’s a difference between talking about an actual situation, full complexities of the messy reality taken into account, where a supernatural being physically shows up and makes you really decide between a human and 10^100 shrimp, and a thought experiment where “you” “decide” between a “human” and 10^100 “shrimp”. In the second case, my model is that we’re implicitly operating in an abstracted-out setup where the terms in the quotation marks are, essentially, assumed ontologically basic, and matching our intuitive/baseline expectations about what they mean.
While, within the hypothetical, we can still have some uncertainty over e. g. the degree of the internal experiences of those “shrimp”, I think we have to remove considerations like “the shrimp will be deposited into a physical space obeying the laws of physics where their mass may form planets and galaxies” or “with so many shrimp, it’s near-certain that the random twitches of some subset of them would spontaneously implement Boltzmann civilizations of uncountably many happy beings”.
IMO, doing otherwise is a kind of “dodging the hypothetical”, no different from considering it very unlikely that the supernatural being really has control over 10^100 of something, and starting to argue about this instead.
While, within the hypothetical, we can still have some uncertainty over e. g. the degree of the internal experiences of those “shrimp”, I think we have to remove considerations like “the shrimp will be deposited into a physical space obeying the laws of physics where their mass may form planets and galaxies” or “with so many shrimp, it’s near-certain that the random twitches of some subset of them would spontaneously implement Boltzmann civilizations of uncountably many happy beings”.
IMO, doing otherwise is a kind of “dodging the hypothetical”, no different from considering it very unlikely that the supernatural being really has control over 10^100 of something, and starting to argue about this instead.
I agree there is something to this, but when actually thinking about tradeoffs that do actually have orders of magnitude of variance in them, which is ultimately where this kind of reasoning is most useful (not 100 orders of magnitude, but you know 30-50 are not unheard of), this kind of abstraction would mostly lead you astray, and so I don’t think it’s a good norm for how to take thought experiments like this.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes that come with that drastic change in quantity. Having 10 flop/s of computation is qualitatively different to having 10^10 flop/s. I can easily imagine someone before the onset of modern computing saying “look, how many numbers do you really need to add in everyday life? What is even the plausible purpose of having 10^10 flop/s available? For what purpose would you need to possibly perform 10 billion operations per second? This just seems completely absurd. Clearly the value of a marginal flop goes to zero long before that. That is more operations than all computers[1] in the world have ever done, in all of history, combined. What could possibly be the point of this?”
And of course, such a person would be sorely mistaken. And framing the thought experiment as “well, no, I think if you want to take this thought experiment seriously you should think about how much you would be willing to pay for the 10 billionth operation of the kind that you are currently doing, which is clearly zero. I don’t want you to hypothesize some kind of new art forms or applications or computing infrastructure or human culture, which feel like they are not the point of this exercise, I want you to think about the marginal item in isolation” would be pointless. It would be emptying the exercise and tradeoff of any of its meaning. If we ever face a choice like this or, anything remotely like it, of course how the world adapts around this, and the applications that get built for it, and the things that aren’t obvious from when you first asked the question matter.
And to be clear, I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than current alive shrimp, whether it would still be worth it to kill a human. And for that, I think the answer is less obviously “yes” or “no”, though my guess is the 10^100 causally isolated shrimp ultimately still enrich the universe more than a human would, and are worth preserving more, but it’s less clear.
And we could focus on that if we want to, I am not opposed to it, but it’s not clear that this is what the OP that sparked this whole thread was talking about, and I find it less illuminating than other tradeoffs, and it would still leave me with a strong reaction that, at the very least, the reason why the answer might be “kill the shrimp” is definitely, absolutely different from the reason why you should not kill a human to allow 7 bees to live for a human lifetime.
I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than current alive shrimp, whether it would still be worth it to kill a human
Yeah, that’s more what I had in mind. Illusion of transparency, I suppose.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes in what it means to have that thing
Certainly, and it’s an important property of reality. But I don’t think this is what extreme hypotheticals such as the one under discussion actually want to talk about (even if you think this is a more important question to focus on)?
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
The hypothetical is interested in shrimp welfare. If we take the above consideration into account, it stops being about “shrimp” at all (see the shrimps-to-rocks move). The abstractions within which the hypothetical is meant to live break.
And yes, if we’re talking about a physical situation involving the number 10^100, the abstractions in question really do break under forces this strong, and we have to navigate the situation with the broken abstractions. But in thought-experiment land, we can artificially stipulate those abstractions inviolable (or replace the crazy-high abstraction-breaking number with a very-high but non-abstraction-breaking number).
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like:
If Jeff Bezos’ net worth reaches $1 trillion, “he could literally end world poverty and give everyone $1 billion and he will still have $91.5 billion left.”
Like, in those discussions people are almost always trying to invoke numbers like “$1 trillion” as “a number so big that the force of the conclusion must be inevitable”, but like most of the time they just fail because the number isn’t big enough.
If someone was like “man, are you really that confident that a shrimp does not have morally relevant experience that you wouldn’t trade a human for a million shrimp?”, my response is “nope, sorry, 1 million isn’t big enough, that’s just really not that big of a number”. But if you give me a number a trillion trillion trillion trillion trillion trillion trillion trillion times bigger, IDK, yeah, that is a much bigger number.
And correspondingly, for every thought experiment of this kind, I do think there is often a number that will just rip through your assumptions and your tradeoffs. There are just really very very very big numbers.
Like, sure, we all agree our abstractions break here, and I am not confident you can’t find any hardening of the abstractions that makes the tradeoff come out in the direction of the size of the number really absolutely not mattering at all, but I think that would be a violation of the whole point of the exercise. Like, clearly we can agree that we assign a non-zero value to a marginal shrimp. We value that marginal shrimp for a lot of different reasons, but like, you probably value it for reasons that do include things like the richness of its internal experience, and the degree to which it differs from other shrimp, and the degree to which it contributes to an ecosystem, and the degree to which it’s an interesting object of trade, and all kinds of reasons. Now, if we want to extrapolate that value to 10^100, those things are still there; we can’t just start ignoring them.
Like, I would feel more sympathetic to this simplification if the author of the post was a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like
Isn’t it the opposite? It’s a defence against providing too-low numbers, it’s specifically to ensure that even infinitesimally small preferences are elicited with certainty.
Bundling up all “this seems like a lot” numbers into the same mental bucket, and then failing to recognize when a real number is not actually as high as in your hypothetical, is certainly an error one could make here. But I don’t see an exact correspondence...
In the billionaires case, a thought-experimenter may invoke the hypothetical of “if a wealthy person had enough money to lift everyone out of poverty while still remaining rich, wouldn’t them not doing so be outrageous?”, while inviting the audience to fill in the definitions of “enough money” and “poverty”. Practical situations might then just fail to match that hypothetical, and innumerate people might fail to recognize that, yes. But this doesn’t mean that that hypothetical is fundamentally useless to reason about, or that it can’t be used to study some specific intuitions/disagreements. (“But there are no rich people with so much money!” kind of maps to “but I did have breakfast!”.)
And in the shrimps case, hypotheticals involving a “very-high but not abstraction-breaking” number of shrimps are a useful tool for discussion/rhetoric. They let us establish agreement/disagreement on “shrimp experiences have inherent value at all”, a relatively simple question that could serve as a foundation for discussing other, more complicated and contextual ones. (Such as “how much should I value shrimp experiences?” or “but do enough shrimps actually exist to add up to more than a human?” or “but is Intervention X to which I’m asked to donate $5 going to actually prevent five dollars’ worth of shrimp suffering?”.)
Like, I think having a policy of always allowing abstraction breaks would just impoverish the set of thought experiments we would be able to consider and use as tools. Tons of different dilemmas would collapse to Pascal’s mugging or whatever.
Like, I would feel more sympathetic to this simplification if the author of the post was a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
Hmm… I think this paragraph at the beginning is what primed me to parse it this way:
Merriam-Webster defines torture as “the infliction of intense pain (as from burning, crushing, or wounding) to punish, coerce, or afford sadistic pleasure.” So I remind the reader that it is part of the second thought experiment that the shrimp are sentient.
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I suppose it’s possible that if I had the full context of the author’s writing in mind, your interpretation would have been obviously correct[2]. But the essay itself reads the opposite way to me.
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I agree I probably implied a bit too much contextualization. Like, I agree the post has a utilitarian bent, but man, I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation; I find attempts at trying to create a “pure qualia shrimp” about as confused and meaningless as trying to argue that 7 bees are more important than a human. “Qualia” isn’t a thing that exists. The only things that exist are your values in all of their complexity and godshatteredness. You can’t make a “pure qualia shrimp”; it doesn’t make any philosophical sense; pure qualia isn’t real.
And I agree that maybe the post was imagining some pure qualia juice, and I don’t know, maybe in that case it makes sense to dismiss it by doing a reductio ad absurdum on qualia juice, but I don’t currently buy it. I think that would both fail to engage with the good parts of the author’s position, and also be kind of a bad step in the discourse (like, the previous step was understanding why it doesn’t make sense for 7 bees to be more important than a human, for a lot of different reasons and very robustly; within that discourse, it’s actually quite important to understand why 10^100 shrimp might actually be more important than a human, under at least a lot of reasonable sets of assumptions).
I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation
Same, honestly. To me, many of these thought experiments seem decoupled from anything practically relevant. But it still seems to me that people often do argue from those abstracted-out frames I’d outlined, and these arguments are probably sometimes useful for establishing at least some agreement on ethics. (I’m not sure what a full-complexity godshatter-on-godshatter argument would even look like (a fistfight, maybe?), and am very skeptical it’d yield any useful results.)
Anyway, it sounds like we mostly figured out what the initial drastic disconnect between our views here was caused by?
Even if you try to elevate your family above everything else, it is commonly accepted that it is not moral to sacrifice all of society for just your family, or to threaten large-scale catastrophe.
This just means that “elevate your family above everything else” is not an approved-of moral principle, not that it somehow doesn’t work on its own terms. In any case this is not a problem with multi-tier morality, it’s just a disagreement on what the tiers should be.
Similarly, as you elevate the interests of your nation above other things, at a sufficient scale the interests of the rest of the world poke their way back into your decision-making in substantial ways.
This, on the other hand, is a matter of instrumental values, not terminal ones. There is once again no problem here with multi-tier morality.
Even if you try to do nothing but elevate the interests of animal life, we have still decided that it is not ethical to destroy even fully abiological ecosystems, and definitely not complicated plant-based ones, for those interests, if the harm is sufficiently large.
Same reply as to the first point. (Also, who has ever advocated so weirdly drawn a moral principle as “do nothing but elevate the interests of animal life”…?)
It doesn’t matter how many shrimp it is.
That is false. The numbers are very big. There are numbers so big that the very act of specifying them would encode calculations capable of simulating universes full of healthy and happy humans. It absolutely matters how big this kind of number is.
It doesn’t matter how big the numbers are, because the moral value of shrimp does not aggregate like that. If it were 3^^^3 shrimp, it still wouldn’t matter.
Again, we are talking about so many shrimp that it would be exceedingly unlikely for this number of shrimp, if left under the auspices of gravity, not to form their own planets and solar systems and galaxies in which life thrives and in which other non-shrimp intelligences form.
Now you’re just smuggling in additional hypothesized entities and concerns. Are we talking about shrimp, or about something else? This is basically a red herring.
That aside—no, the numbers really don’t matter, because that’s just not how moral value of shrimp works, in any remotely sensible moral system. A trillion shrimp do not have a million times the moral value of a million shrimp. If your morality says that they do, then your morality is broken.
A trillion shrimp do not have a million times the moral value of a million shrimp. If your morality says that they do, then your morality is broken.
Nobody was saying this! The author of the post in question also does not believe this!
I am not a hedonic utilitarian. I do not think that a trillion shrimp have a million times the moral value of a million shrimp. That is a much much stronger statement than whether there exists any number of shrimp that might be worth more than a human. All you’ve done here is to set up a total strawman that nobody was arguing for and knocked it down.
… 1,000 times the moral value of a million shrimp?
… 10 times the moral value of a million shrimp?
… 1.1 times the moral value of a million shrimp?
… some other multiplicative factor, larger than 1, times the moral value of a million shrimp?
If the answer is “no” to all of these, then that seems like it would mean that you already agree with me, and your previous comments here wouldn’t make any sense. So it seems like the answer has to be “yes” to something in that list.
But then… my response stands, except with the relevant number changed.
On the other hand, you also say:
I am not a hedonic utilitarian.
I… don’t understand how you could be using this term that would make this a meaningful or relevant thing to say in response to my comment. Ok, you’re not a hedonic utilitarian, and thus… what?
Is the point that your claim that saving 10^100 shrimp instead of one human isn’t insane… was actually not a moral claim at all, but some other kind of claim (prudential, for instance)? No, that doesn’t seem to work either, because you wrote:
No, being extremely overwhelmingly confident about morality such that even if you are given a choice to drastically alter 99.999999999999999999999% of the matter in the universe, you call the side of not destroying it “insane” for not wanting to give up a single human life, a thing we do routinely for much weaker considerations, is insane.
So clearly this is about morality…
… yeah, I can’t make any sense of what you’re saying here. What am I missing?
… 1,000 times the moral value of a million shrimp?
… 10 times the moral value of a million shrimp?
… 1.1 times the moral value of a million shrimp?
… some other multiplicative factor, larger than 1, times the moral value of a million shrimp?
I don’t know, seems like a very hard question, and I think will be quite sensitive to a bunch of details of the exact comparison. Like, how much cognitive diversity is there among the shrimp? Are the shrimps forming families and complicated social structures, or are they all in an isolated grid? Are they providing value to an extended ecosystem of other life? How rich is the life of these specific shrimp?
I would be surprised if the answer basically ever turned out to be less than 1.1, and surprised if it ever turned out to be more than 10,000.
But then… my response stands, except with the relevant number changed.
I don’t think your response said anything except to claim that a linear relationship between shrimp and values seems to quickly lead to absurd conclusions (or at least that is what I inferred from your claim of saying that a trillion shrimp is not a million times more valuable than a million shrimp). I agree with that as a valid reductio ad absurdum, but given that I see no need for linearity here (simply any ratio, which could even differ with the scale and details of the scenario), I don’t see how your response stands.
… yeah, I can’t make any sense of what you’re saying here. What am I missing?
I have little to go off of besides to repeat myself, as you have given me little to work with besides repeated insistence that what I believe is wrong or absurd. My guess is my meaning is more clear (though probably still far from perfectly clear) to other readers.
I don’t know, seems like a very hard question, and I think will be quite sensitive to a bunch of details of the exact comparison. Like, how much cognitive diversity is there among the shrimp? Are the shrimps forming families and complicated social structures, or are they all in an isolated grid? Are they providing value to an extended ecosystem of other life? How rich is the life of these specific shrimp?
I mean… we know the answers to these questions, right? Like… shrimp are not some sort of… un-studied exotic form of life. (In any case it’s a moot point, see below.)
I would be surprised if the answer basically ever turned out to be less than 1.1, and surprised if it ever turned out to be more than 10,000.
Right, so, “some … multiplicative factor, larger than 1”. That’s what I assumed. Whether that factor is 1 million, or 1.1, really doesn’t make any difference to what I wrote earlier.
I don’t think your response said anything except to claim that a linear relationship between shrimp and values seems to quickly lead to absurd conclusions (or at least that is what I inferred from your claim of saying that a trillion shrimp is not a million times more valuable than a million shrimp). I agree with that as a valid reductio ad absurdum, but given that I see no need for linearity here (simply any ratio, which could even differ with the scale and details of the scenario), I don’t see how your response stands.
No, my point is that any factor at all that is larger than 1, and remains larger than 1 as numbers increase, leads to absurd conclusions. (Like, for example, the conclusion that there is some number of shrimp such that that many shrimp are worth more than a human life.)
Given this correction, do you still think that I’m strawmanning or misunderstanding your views…? (I repeat that linearity is not the target of my objection!)
No, my point is that any factor at all that is larger than 1, and remains larger than 1 as numbers increase
I mean, clearly you agree that two shrimp are more important than one shrimp, and continues to be more important (at least for a while) as the numbers increase. So no, I don’t understand what you are saying, as nothing you have said appears sensitive to any numbers being different, and clearly for small numbers you agree that these comparisons must hold.
I agree there is a number big enough where eventually you approach 1; nothing I have said contradicts that. As in, my guess is the series of the value of shrimp as n goes to infinity does not diverge but eventually converges on some finite number (though, especially with considerations like Boltzmann brains and quantum uncertainty and matter/energy density, it does seem confusing to think about).
It seems quite likely to me that this point of convergence is above the value of a human life, as numbers can really get very big, there are a lot of humans, and shrimp are all things considered pretty cool and interesting and a lot of shrimp seem like they would give rise to a lot of stuff.
I mean, clearly you agree that two shrimp are more important than one shrimp
Hm… no, I don’t think so. Enough shrimp to ensure that there keep being shrimp—that’s worth more than one shrimp. Less shrimp than that, though—nah.
I agree there is a number big enough where eventually you approach 1, nothing I have said contradicts that. As in, my guess is the series of the value of shrimp as n goes to infinity does not diverge but eventually converge on some finite number, though it does feel kind of confusing to think about.
Sure, this is all fine (and nothing that I have said contradicts you believing this; it seems like you took my objection to be much narrower than it actually was), but you’re saying that this number is much larger than the value of a human life. That’s the thing that I’m objecting to.
I’ll mostly bow out at this point, but one quick clarification:
but you’re saying that this number is much larger than the value of a human life
I didn’t say “much larger”! Like, IDK, my guess is there is some number of shrimp for which its worth sacrificing a thousand humans, which is larger, but not necessarily “much”.
My guess is there is no number, at least in the least convenient world where we are not talking about shrimp galaxies forming alternative life forms, for which it’s worth sacrificing 10 million humans, at least at current population levels and on the current human trajectory.
10 million is just a lot, and humanity has a lot of shit to deal with, and while I think it would be an atrocity to destroy this shrimp-gigaverse, it would also be an atrocity to kill 10 million people, especially intentionally.
Edit: Nevermind, evidently I’ve not thought this through properly. I’m retracting the below.
The naïve formulations of utilitarianism assume that all possible experiences can be mapped to scalar utilities lying onthe same,continuousspectrum, and that experiences’ utility is additive. I think that’s an error.This is how we get the frankly insane conclusions like “you should save10100shrimps instead of one human” oreveryone’s perennial favorite, “if you’re choosing between one person getting tortured for 50 years or some amount of peopleNgetting a dust speck into their eye, there must be anNbig enough that the torture is better”. I disagree with those. I would sacrifice an arbitrarily large amount of shrimps to save one human, and there’s noNbig enough for me to pick torture. I don’t care if that disagrees with what the math says: if the math says something else, it’s the wrong math and we should find a better one.Here’s a sketch of how I think such better math might look like:There’s a totally ordered,discreteset of “importance tiers” of experiences.Withintiers, the utilities are additive: two people getting dust specks is twice as bad as one person getting dust-speck’d, two people being tortured is twice as bad as one person being tortured, eating a chocolate bar twice per week is twice as good as eating a bar once per week, etc.Acrosstiers, the tier ordering dominates: if we’re comparing some experience belonging to a higher tier to any combination of experiences from lower tiers, the only relevant consideration is the sign of the higher-tier experience. No amount of people getting dust-speck’d, and no amount of dust-speck events sparsely distributed throughout one person’s life, can ever add up to anything as important as a torture-level experience.[1]Intuitively, tiers correspond to the size of effect a given experience has on a person’s life:A dust speck is a minor inconvenience that is forgotten after a second. If we zoom out and consider the person’s life at a higher level, say on the scale of a day, this experience rounds offexactlyto zero, rather than to an infinitesimally small but nonzero value. (Again, on the assumption of a median dust-speck event, no emergent or butterfly effects.)Getting yelled at by someone might ruin the entirety of someone’s day, but is unlikely to meaningfully change the course of their life. Experiences on this tier are more important than any amount of dust-speck experiences, but any combination of them rounds down to zero from a life’s-course perspective.Getting tortured is likely to significantly traumatize someone, to have a lasting negative impact on their life. Experiences at this tier ought to dominate getting-yelled-at as much as getting-yelled-at dominates dust specks.Physically, those “importance tiers” probably fall out of the hierarchy ofnatural abstractions. Like everything else, a person’s life has different levels of organization. Any detail in how the high-level life-history goes is incomparably more important than any experience which is only locally relevant (which fails to send long-distance ripples throughout the person’s life). 
Butterfly effects are then the abstraction leaks (low-level events that perturb high-level dynamics), etc.I didn’t spend much time thinking about this, so there may be some glaring holes here, but this already fits my intuitionsmuchbetter.I think we can expand that framework to cover “tiers of sentience”:If shrimps have qualia, it might be thatanyqualia they’re capable of experiencing belong to lower-importance tiers, compared to the highest-tier human qualia.Simultaneously, it might be the case that the highest-importance shrimp qualia are on the level of the lower-importance human qualia.Thus, it might be reasonable to sacrifice the experience of eating a chocolate bar to save10100shrimps, even if you’d never sacrifice a person’s life (or even make someone cry) to save any amount of shrimps.This makes some intuitive sense, I think. The model above assumes that “local” experiences, which have no impact on the overarching pattern of a person’s life, are arbitrarily less important than that overarching pattern. What if we’re dealing with beings whose internal liveshaveno such overarching patterns, then? A shrimp’s interiority is certainly less complex than that of a human, so it seems plausible that its life-experience lacks comparably rich levels of organization (something like “the ability to experience what is currently happening as part of the tapestry spanning its entire life, rather than as an isolated experience”). So all of its qualia would be comparable only with the “local” experiences of a human, for some tier of locality: we would have direct equivalence between them.One potential issue here is that this implies the existence of utility monsters: some divine entities such that they can have experiences incomparably more important than any experience a human can have. I guess it’s possible that if I understood qualia better, I would agree with that, but this seems about as anti-intuitive as “shrimps matter as much as humans”. My intuition is that sapient entitiestopthe hierarchy of moral importance, that there’s nothing meaningfully “above” them. So that’s an issue.One potential way to deal with this is to suggest that what distinguishes sapient/”generally intelligent” entities is not that they’re the only entities whose experiences matter, but that they have the ability to (learn to) have experiences ofarbitrarily hightiers. And indeed: the whole shtick of “general intelligence” is that it should allow you to learn and reason about arbitrarily complicated systems of abstraction/multi-level organization. If the importance tiers of experiences really have something to do with the richness of the organization of the entity’s inner life, this resolves things neatly. Now:Non-sapient entities may have experiences of nonzero importance.No combination of non-sapient experiences can compare to the importance of a sapient entity’s life.“Is sapient” tops the hierarchy of moral relevance: there’s no type of entity that is fundamentally “above”.Two caveats here are butterfly effects and emergent importance:Getting a dust speck at the wrong moment might kill you (if you’re operating dangerous machinery) or change the trajectory of your life (if this minor inconvenience is the last straw that triggers a career-ruining breakdown). We have to assume such possibilities away: the experiences exist “in a vacuum”. 
Doing otherwise would violate the experimental setup, dragging in various practical considerations, instead of making it purely about ethics. So we assume that each dust-speck event always has the “median” amount of impact on a person’s life, even if you scale the amount of dust-speck events arbitrarily.
Getting 1000 dust specks one after another adds up to something more than 1000 single-dust-speck experiences; it’s worse than getting a dust speck once per day for 1000 days. More intuitively, experiencing a 10/10 pain for one millisecond is not comparable to experiencing 10/10 pain for 10 minutes. There are emergent effects at play, and like with butterflies, we must assume them away for experimental purity. So if we’re talking about M experiences from within the same importance tier, they’re assumed to be distributed such that they don’t add up to a higher-tier experience.
Note that those are very artificial conditions. In real life, both of those are very much in play. Any lower-tier experience has a chance of resulting in a higher-tier experience, and every higher-tier experience emerges from (appropriately distributed) lower-tier experiences. In our artificial setup, we’re assuming certain knowledge that no butterfly effects would occur, and that a lower-tier event contributes to no higher-tier pattern.
Relevance: There’s reasoning that goes, “if you ever drive to the store to get a chocolate bar, you’re risking crashing into and killing someone, therefore you don’t value people’s lives infinitely more than eating chocolate”. I reject it on the above grounds. Systematically avoiding all situations where you’re risking someone’s life in exchange for a low-importance experience would assemble into a high-importance life-ruining experience for you (starving to death in your apartment, I guess?). Given that, we’re now comparing same-tier experiences, and here I’m willing to be additive, calculating that killing a person with very low probability is better than killing yourself (by a thousand cuts) with certainty.
Besides uncertainty, there’s the problem of needing to pick cutoffs between tiers in a ~continuous space of “how much effect does this have on a person’s life?”, with things slightly on one side or the other of a cutoff being treated very differently.
I agree with the intuition that this is important, but I think that points toward just rejecting utilitarianism (as in utility-as-a-function-purely-of-local-experiences, not consequentialism).
It’s worth noting that everything funges: some large number of experiences of eating a chocolate bar can be exchanged for avoiding extreme human suffering or death. So, if you lexicographically put higher weight on extreme human suffering or death, then you’re willing to make extreme tradeoffs (e.g. 10^30 chocolate bar experiences) in terms of mundane utility for saving a single life. I think this easily leads to extremely unintuitive conclusions, e.g. you shouldn’t ever be willing to drive to a nice place. See also Trading off Lives.
I find your response to this sort of argument under “Relevance: There’s reasoning that goes” in the footnote very uncompelling as it doesn’t apply to marginal impacts.
Ok, but if you don’t drive to the store one day to get your chocolate, then that is not a major pain for you, yes? Why not just decide that next time you want chocolate at the store, you’re not going to go out and get it because you may run over a pedestrian? Your decision there doesn’t need to impact your other decisions.
Then you ought to keep on making that choice until you are right on the edge of those choices adding up to a first-tier experience, but certainly below.
This logic generalizes. You will always be pushing the lower tiers of experience as low as they can go before they enter the upper tiers of experience. I think the fact that your paragraph above is clearly motivated reasoning here (instead of “how can I actually get the most bang for my buck within this moral theory” style reasoning) shows that you agree with me (and many others) that this is flawed.
We could easily ban speeds above 15 km/h for any vehicles except ambulances. Nobody starves to death in this scenario; it’s just very inconvenient. We value the convenience lost in this scenario more than the lives lost in our reality, so we don’t ban high-speed vehicles.
Ordinal preferences are bad and insane and they are to be avoided.
What’s really wrong with utilitarianism is that you can’t, actually, sum utilities: it’s a type error. Utilities are only defined up to a positive affine transformation, so what would their sum even mean?
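To make the type error concrete (my own illustration, not part of the comment above): rescaling one agent’s utility by a positive affine transformation changes nothing about that agent’s preferences, but it can flip which outcome has the larger sum. With two agents and two outcomes,

$$u = (1,\ 0),\quad v = (0,\ 2)\ \Rightarrow\ \textstyle\sum u = 1 < \sum v = 2,$$

but after rescaling agent 2’s utility by $a_2 = \tfrac{1}{4}$ (its preferences are unchanged),

$$u' = (1,\ 0),\quad v' = (0,\ 0.5)\ \Rightarrow\ \textstyle\sum u' = 1 > \sum v' = 0.5.$$

Which total is bigger depends on an arbitrary choice of scale per agent, so the sum carries no meaning without some extra interpersonal-comparison assumption.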
The problem, I think, is that humans naturally conflate two types of altruism. The first type is caring about other entities’ mental states. The second type is “game-theoretic” or “alignment-theoretic” altruism: a generalized notion of what it means to care about someone else’s values. Roughly, I think the good version of the second type of altruism requires you to bargain fairly on behalf of the entity you are being altruistic towards.
Let’s take the “World Z” thought experiment. The problem, from the second-type-of-altruism perspective, is that the total utilitarian gets very large utility from this world, while all inhabitants of this world, by premise, get very small utility per person, which is an unfair division of gains.
One may object: why not create entities who think that a very small share of the gains is fair? My answer is that if an entity can be satisfied with an infinitesimal share of the gains, it can also be satisfied with an infinitesimal share of anthropic measure, i.e., non-existence, and it’s more altruistic to look for more demanding entities to fill the universe with.
My general problem with animal welfare from the bargaining perspective is that most animals probably don’t have sufficient agency to have any sort of representative in the bargaining. We can imagine a CEV of shrimp which is negative-utilitarian and wants to kill all shrimp, or a positive-utilitarian one which thinks that even a very painful existence is worth it, or a CEV that prefers shrimp swimming in heroin, or something human-like, or something totally alien, and the sum of these guesses probably comes out to “do not torture and otherwise do as you please”.
Huh, I expected better from you.
No, it is absolutely not insane to save 10^100 shrimp instead of one human! I think the case for insanity for the opposite is much stronger! Please, actually think about how big 10^100 is. We are talking about more shrimp than atoms in the universe. Trillions upon trillions of shrimp more than atoms in the universe.
This is a completely different kind of statement than “you should trade off seven bees against a human”.
No, being so overwhelmingly confident about morality that, when given a choice to drastically alter 99.999999999999999999999% of the matter in the universe, you call the side that wouldn’t destroy it “insane” because you’re unwilling to give up a single human life (a thing we do routinely for much weaker considerations), is insane.
The whole “tier” thing obviously fails. You always end up dominated by spurious effects on the highest tier. In a universe with any appreciable uncertainty you basically just ignore any lower tiers, because you can always tell some causal story of how your actions might infinitesimally affect something, and so you completely ignore it. You might as well just throw away all morality except the highest tier, it will never change any of your actions.
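A minimal sketch of the failure mode being described (my own illustration; the tier representation and the probabilities are made up for the example). If you represent an act’s expected utility as a tuple with the most important tier first, Python’s built-in tuple comparison is already lexicographic, so any nonzero expected effect at the top tier settles the comparison no matter what happens below it:

```python
# Expected utilities per tier, most important tier first: (torture-tier, dust-speck-tier).

# Option A: a certain dust-speck-tier benefit for a billion people, plus a
# one-in-a-trillion chance of causing a single torture-tier harm (utility -1).
option_a = (1e-12 * -1, 1e9 * 1.0)

# Option B: do nothing at all.
option_b = (0.0, 0.0)

# Tuple comparison is lexicographic: -1e-12 at the top tier loses to 0.0,
# so "do nothing" wins despite forgoing the enormous lower-tier benefit.
print(max(option_a, option_b))  # -> (0.0, 0.0)
```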
I’m normally in favor of high decoupling, but this thought experiment seems to take it well beyond the point of absurdity. If I somehow found myself in control of the fate of 10^100 shrimp, the first thing I’d want to do is figure out where I am and what’s going on, since I’m clearly no longer in the universe I’m familiar with.
Yeah, I mean, that also isn’t a crazy response. I think being like “what would it even mean to have 10^30 times more shrimp than atoms? Really seems like my whole ontology about the world must be really confused” also seems fine. My objection is mostly to “it’s obvious you kill the shrimp to save the human”.
Oh, easy, it just implies you’re engaging in acausal trade with a godlike entity residing in some universe dramatically bigger than this one. This interpretation introduces no additional questions or complications whatsoever.
Hm. Okay, so my reasoning there went as follows:
I’m not sure where you’d get off this train, but I assume the last bullet-point would do this? I. e., that you would argue that holding the possibility of shrimps having human-level qualia is salient in a way it’s not for rocks?
Yeah, that seems valid. I might’ve shot from the hip on that one.
I have a story for how that would make sense, similarly involving juggling inside-model and outside-model reasoning, but, hm, I’m somehow getting the impression my thinking here is undercooked/poorly presented. I’ll revisit that one at a later time.
Edit: Incidentally, any chance the UI for retracting a comment could be modified? I have two suggestions here:
I’d like to be able to list a retraction reason, ideally at the top of the comment.
The crossing-out thing makes it difficult to read the comment afterwards, and some people might want to be able to do that. Perhaps it’s better to automatically put the contents into a collapsible instead, or something along those lines?
You should be able to strike out the text manually and get the same-ish effect, or leave a retraction notice. The text being hard to read is intentional so that it really cannot be the case that someone screenshots it or skims it without noticing that it is retracted.
I think it pretty clearly is insane to save 10^100 shrimp instead of one human! It doesn’t matter how many shrimp it is. The moral value of shrimp does not aggregate like that.
The grandparent comment is obviously correct in its description of the problem. (Whether the proposed solution works is another question entirely.)
That’s just not true.
One obvious approach is: once you get to the point where noise (i.e., non-systematic error) dominates your calculations for a particular tier, you ignore that tier and consider the next lower tier. (This is also approximately equivalent to a sort of reasoning which we use all the time, and it works pretty straightforwardly, without giving rise to the sorts of pathologies you allude to.)
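A rough sketch of how I read that rule (my own code, not something anyone in the thread wrote; the noise thresholds are placeholder values): walk the tiers from most to least important, and let the first tier whose estimated effect clearly exceeds its noise floor decide the comparison.

```python
def tiered_compare(effects, noise_floors):
    """effects[i]: estimated utility difference between two options at tier i
    (index 0 = most important tier). noise_floors[i]: the magnitude below which
    the estimate at that tier is treated as indistinguishable from noise."""
    for effect, noise in zip(effects, noise_floors):
        if abs(effect) > noise:
            return 1 if effect > 0 else -1  # this tier decides
        # otherwise this tier is a wash; fall through to the next-lower tier
    return 0  # nothing rises above noise at any tier

# The 1e-12 expected top-tier harm from the earlier example is swamped by its
# noise floor, so the dust-speck tier gets to decide and the comparison flips:
print(tiered_compare(effects=[-1e-12, 1e9], noise_floors=[1e-6, 1.0]))  # -> 1
```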
There is no clear or adequate definition of what “[noise] dominates your calculations” means. Maybe you can provide one, but I’ve never seen anyone provide any such definition, or make much headway in doing so.
Creating such a definition of noise has proven to be quite hard, as it’s extremely rare that someone is willing to ignore absolutely all stakes at lower levels of concern or abstraction, no matter the magnitude.
Even if you try to elevate your family above everything else, it is commonly accepted that it is not moral to sacrifice all of society for just your family, or to threaten large-scale catastrophe.
Similarly, as you elevate the interests of your nation above other things, at a sufficient scale the interests of the rest of the world poke their way back into your decision-making in substantial ways.
Even if you try to do nothing but elevate the interests of animal life, we have still decided that it is not ethical to destroy even fully abiological ecosystems (and definitely not complicated plant-based ones) for those interests, if the harm is sufficiently large.
You may want to propose that we make decisions this way, but humanity absolutely does not generally make decisions this way. When people have to make decisions, they usually settle on some rough thresholds for noticing tradeoffs across domains, and they decide and re-evaluate how important something is when a decision affects something in a different domain at a much larger scale than other decisions do.
Look, there are numbers that are very very big.[1]
Again, we are talking about so many shrimp that it would be exceedingly likely for this number of shrimp, if left under the auspices of gravity, to form their own planets and solar systems and galaxies in which life thrives and in which other, non-shrimp intelligences form. A number so incomprehensibly big. A galaxy within each atom of our universe made out of shrimp. One can argue it’s meaningless to talk about numbers this big, and while I would dispute that, it’s definitely a much more sensible position than trying to take a confident stance to destroy or substantially alter a set of things so large that it vastly eclipses in complexity and volume and mass and energy all that has ever or will ever exist, by a trillion-fold.
Indeed, there are numbers so big that the very act of specifying them would encode calculations capable of simulating universes full of healthy and happy humans. The space of numbers is really very large.
Okay, while I’m hastily backpedaling from the general claims I made, I am interested in your take on the first half of this post. I think there’s a difference between talking about an actual situation, full complexities of the messy reality taken into account, where a supernatural being physically shows up and makes you really decide between a human and 10^100 shrimp, and a thought experiment where “you” “decide” between a “human” and 10^100 “shrimp”. In the second case, my model is that we’re implicitly operating in an abstracted-out setup where the terms in the quotation marks are, essentially, assumed ontologically basic, and matching our intuitive/baseline expectations about what they mean.
While, within the hypothetical, we can still have some uncertainty over e. g. the degree of the internal experiences of those “shrimp”, I think we have to remove considerations like “the shrimp will be deposited into a physical space obeying the laws of physics where their mass may form planets and galaxies” or “with so many shrimp, it’s near-certain that the random twitches of some subset of them would spontaneously implement Boltzmann civilizations of uncountably many happy beings”.
IMO, doing otherwise is a kind of “dodging the hypothetical”, no different from considering it very unlikely that the supernatural being really has control over 10^100 of something, and starting to argue about this instead.
I agree there is something to this, but when actually thinking about tradeoffs that do actually have orders of magnitude of variance in them, which is ultimately where this kind of reasoning is most useful (not 100 orders of magnitude, but you know 30-50 are not unheard of), this kind of abstraction would mostly lead you astray, and so I don’t think it’s a good norm for how to take thought experiments like this.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes that come with that drastic change in quantity. Having 10 flop/s of computation is qualitatively different to having 10^10 flop/s. I can easily imagine someone before the onset of modern computing saying “look, how many numbers do you really need to add in everyday life? What is even the plausible purpose of having 10^10 flop/s available? For what purpose would you need to possibly perform 10 billion operations per second? This just seems completely absurd. Clearly the value of a marginal flop goes to zero long before that. That is more operations than all computers[1] in the world have ever ever done, in all of history, combined. What could possibly be the point of this?”
And of course, such a person would be sorely mistaken. And framing the thought experiment as “well, no, I think if you want to take this thought experiment seriously you should think about how much you would be willing to pay for the 10 billionth operation of the kind that you are currently doing, which is clearly zero. I don’t want you to hypothesize some kind of new art forms or applications or computing infrastructure or human culture, which feel like they are not the point of this exercise, I want you to think about the marginal item in isolation” would be pointless. It would be emptying the exercise and tradeoff of any of its meaning. If we ever face a choice like this or, anything remotely like it, of course how the world adapts around this, and the applications that get built for it, and the things that aren’t obvious from when you first asked the question matter.
And to be clear, I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than currently alive shrimp, it would still be worth it to kill a human. And for that, I think the answer is less obviously “yes” or “no”, though my guess is the 10^100 causally isolated shrimp ultimately still enrich the universe more than a human would, and are worth preserving more, but it’s less clear.
And we could focus on that if we want to, I am not opposed to it, but it’s not clearly what the OP that sparked this whole thread was talking about, and I find it less illuminating than other tradeoffs, and it would still leave me with a strong reaction that at least the reason for why the answer might be “kill the shrimp” is definitely absolutely different from the answer to why you should not kill a human to allow 7 bees to live for a human lifetime.
here referring to the human occupation of “computer”, i.e. someone in charge of performing calculations
Yeah, that’s more what I had in mind. Illusion of transparency, I suppose.
Certainly, and it’s an important property of reality. But I don’t think this is what extreme hypotheticals such as the one under discussion actually want to talk about (even if you think this is a more important question to focus on)?
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be "10^100" “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
The hypothetical is interested in shrimp welfare. If we take the above consideration into account, it stops being about “shrimp” at all (see the shrimps-to-rocks move). The abstractions within which the hypothetical is meant to live break.
And yes, if we’re talking about a physical situation involving the number 10^100, the abstractions in question really do break under forces this strong, and we have to navigate the situation with the broken abstractions. But in thought-experiment land, we can artificially stipulate those abstractions inviolable (or replace the crazy-high abstraction-breaking number with a very-high but non-abstraction-breaking number).
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like:
Like, in those discussions people are almost always trying to invoke numbers like “$1 trillion” as “a number so big that the force of the conclusion must be inevitable”, but like most of the time they just fail because the number isn’t big enough.
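For concreteness, one illustrative back-of-the-envelope division (mine, not a figure quoted from anyone in this thread): “$1 trillion” sounds like a conclusion-forcing number until you actually spread it over the people it’s supposed to transform,

$$\frac{\$10^{12}}{8 \times 10^{9}\ \text{people}} \approx \$125\ \text{per person},$$

which is real money, but nowhere near the “lift everyone out of poverty forever” force the number is usually asked to carry.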
If someone was like “man, are you really that confident that a shrimp does not have morally relevant experience that you wouldn’t trade a human for a million shrimp?”, my response is “nope, sorry, 1 million isn’t big enough, that’s just really not that big of a number”. But if you give me a number a trillion trillion trillion trillion trillion trillion trillion trillion times bigger, IDK, yeah, that is a much bigger number.
And correspondingly, for every thought experiment of this kind, I do think there is often a number that will just rip through your assumptions and your tradeoffs. There are just really very very very big numbers.
Like, sure, we all agree our abstractions break here, and I am not confident you can’t find any hardening of the abstractions that makes the tradeoff come out in the direction of the size of the number really absolutely not mattering at all, but I think that would be a violation of the whole point of the exercise. Like, clearly we can agree that we assign a non-zero value to a marginal shrimp. We value that marginal shrimp for a lot of different reasons, but like, you probably value it for reasons that do include things like the richness of its internal experience, and the degree to which it differs from other shrimp, and the degree to which it contributes to an ecosystem, and the degree to which it’s an interesting object of trade, and all kinds of reasons. Now, if we want to extrapolate that value to 10^100, those things are still there; we can’t just start ignoring them.
Like, I would feel more sympathetic to this simplification if the author of the post were a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
Isn’t it the opposite? It’s a defence against providing too-low numbers, it’s specifically to ensure that even infinitesimally small preferences are elicited with certainty.
Bundling up all “this seems like a lot” numbers into the same mental bucket, and then failing to recognize when a real number is not actually as high as in your hypothetical, is certainly an error one could make here. But I don’t see an exact correspondence...
In the billionaires case, a thought-experimenter may invoke the hypothetical of “if a wealthy person had enough money to lift everyone out of poverty while still remaining rich, wouldn’t them not doing so be outrageous?”, while inviting the audience to fill in the definitions of “enough money” and “poverty”. Practical situations might then just fail to match that hypothetical, and innumerate people might fail to recognize that, yes. But this doesn’t mean that that hypothetical is fundamentally useless to reason about, or that it can’t be used to study some specific intuitions/disagreements. (“But there are no rich people with so much money!” kind of maps to “but I did have breakfast!”.)
And in the shrimps case, hypotheticals involving a “very-high but not abstraction-breaking” number of shrimps are a useful tool for discussion/rhetoric. They allow us to establish agreement/disagreement on “shrimp experiences have inherent value at all”, a relatively simple question that could serve as a foundation for discussing other, more complicated and contextual ones. (Such as “how much should I value shrimp experiences?” or “but do enough shrimps actually exist to add up to more than a human?” or “but is Intervention X to which I’m asked to donate $5 going to actually prevent five dollars’ worth of shrimp suffering?”.)
Like, I think having a policy of always allowing abstraction breaks would just impoverish the set of thought experiments we would be able to consider and use as tools. Tons of different dilemmas would collapse to Pascal’s mugging or whatever.
Hmm… I think this paragraph at the beginning is what primed me to parse it this way:
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I suppose it’s possible that if I had the full context of the author’s writing in mind, your interpretation would have been obviously correct[2]. But the essay itself reads the opposite way to me.
A pretty strong one, I think, since “are shrimp qualia of nonzero moral relevance?” is often the very point of many discussions.
Indeed, failing to properly familiarize myself with the discourse and the relevant frames before throwing in hot takes was my main blunder here.
I agree I probably implied a bit too much contextualization. Like, I agree the post has a utilitarian bend, but man, I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation; I find attempts at trying to create a “pure qualia shrimp” about as confused and meaningless as trying to argue that 7 bees are more important than a human. “Qualia” isn’t a thing that exists. The only thing that exists is your values in all of their complexity and godshatteredness. You can’t make a “pure qualia shrimp”, it doesn’t make any philosophical sense, pure qualia isn’t real.
And I agree that maybe the post was imagining some pure qualia juice, and I don’t know, maybe in that case it makes sense to dismiss it by doing a reductio ad absurdum on qualia juice, but I don’t currently buy it. I think that would both fail to engage with the best parts of the author’s view, and also be kind of a bad step in the discourse (like, the previous step was understanding why it doesn’t make sense for 7 bees to be more important than a human, for a lot of different reasons and very robustly; within that discourse, it’s actually quite important to understand why 10^100 shrimp might actually be more important than a human, under at least a lot of reasonable sets of assumptions).
Same, honestly. To me, many of these thought experiments seem decoupled from anything practically relevant. But it still seems to me that people often do argue from those abstracted-out frames I’d outlined, and these arguments are probably sometimes useful for establishing at least some agreement on ethics. (I’m not sure what a full-complexity godshatter-on-godshatter argument would even look like (a fistfight, maybe?), and am very skeptical it’d yield any useful results.)
Anyway, it sounds like we mostly figured out what the initial drastic disconnect between our views here was caused by?
Yeah, I think so, though not sure. But I feel good stopping here.
This just means that “elevate your family above everything else” is not an approved-of moral principle, not that it somehow doesn’t work on its own terms. In any case this is not a problem with multi-tier morality, it’s just a disagreement on what the tiers should be.
This, on the other hand, is a matter of instrumental values, not terminal ones. There is once again no problem here with multi-tier morality.
Same reply as to the first point. (Also, who has ever advocated so weirdly drawn a moral principle as “do nothing but elevate the interests of animal life”…?)
It doesn’t matter how big the numbers are, because the moral value of shrimp does not aggregate like that. If it were 3^^^3 shrimp, it still wouldn’t matter.
Now you’re just smuggling in additional hypothesized entities and concerns. Are we talking about shrimp, or about something else? This is basically a red herring.
That aside—no, the numbers really don’t matter, because that’s just not how moral value of shrimp works, in any remotely sensible moral system. A trillion shrimp do not have a million times the moral value of a million shrimp. If your morality says that they do, then your morality is broken.
Nobody was saying this! The author of the post in question also does not believe this!
I am not a hedonic utilitarian. I do not think that a trillion shrimp have a million times the moral value of a million shrimp. That is a much much stronger statement than whether there exists any number of shrimp that might be worth more than a human. All you’ve done here is to set up a total strawman that nobody was arguing for and knocked it down.
Ok. Do you think that a trillion shrimp have:
… 1,000 times the moral value of a million shrimp?
… 10 times the moral value of a million shrimp?
… 1.1 times the moral value of a million shrimp?
… some other multiplicative factor, larger than 1, times the moral value of a million shrimp?
If the answer is “no” to all of these, then that seems like it would mean that you already agree with me, and your previous comments here wouldn’t make any sense. So it seems like the answer has to be “yes” to something in that list.
But then… my response stands, except with the relevant number changed.
On the other hand, you also say:
I… don’t understand how you could be using this term that would make this a meaningful or relevant thing to say in response to my comment. Ok, you’re not a hedonic utilitarian, and thus… what?
Is the point that your claim (that saving 10^100 shrimp instead of one human isn’t insane) was actually not a moral claim at all, but some other kind of claim (prudential, for instance)? No, that doesn’t seem to work either, because you wrote:
So clearly this is about morality…
… yeah, I can’t make any sense of what you’re saying here. What am I missing?
I don’t know, seems like a very hard question, and I think will be quite sensitive to a bunch of details of the exact comparison. Like, how much cognitive diversity is there among the shrimp? Are the shrimps forming families and complicated social structures, or are they all in an isolated grid? Are they providing value to an extended ecosystem of other life? How rich is the life of these specific shrimp?
I would be surprised if the answer basically ever turned out to be less than 1.1, and surprised if it ever turned out to be more than 10,000.
I don’t think your response said anything except to claim that a linear relationship between shrimp and value quickly leads to absurd conclusions (or at least that is what I inferred from your saying that a trillion shrimp are not a million times more valuable than a million shrimp). I agree with that as a valid reductio ad absurdum, but given that I see no need for linearity here (simply any ratio, which could even differ with the scale and details of the scenario), I don’t see how your response stands.
I have little to go off of besides to repeat myself, as you have given me little to work with besides repeated insistence that what I believe is wrong or absurd. My guess is my meaning is more clear (though probably still far from perfectly clear) to other readers.
I mean… we know the answers to these questions, right? Like… shrimp are not some sort of… un-studied exotic form of life. (In any case it’s a moot point, see below.)
Right, so, “some … multiplicative factor, larger than 1”. That’s what I assumed. Whether that factor is 1 million, or 1.1, really doesn’t make any difference to what I wrote earlier.
No, my point is that any factor at all that is larger than 1, and remains larger than 1 as numbers increase, leads to absurd conclusions. (Like, for example, the conclusion that there is some number of shrimp such that that many shrimp are worth more than a human life.)
Given this correction, do you still think that I’m strawmanning or misunderstanding your views…? (I repeat that linearity is not the target of my objection!)
I mean, clearly you agree that two shrimp are more important than one shrimp, and that this continues to hold (at least for a while) as the numbers increase. So no, I don’t understand what you are saying, as nothing you have said appears sensitive to any numbers being different, and clearly for small numbers you agree that these comparisons must hold.
I agree there is a number big enough where eventually you approach 1; nothing I have said contradicts that. As in, my guess is the series of the value of shrimp as n goes to infinity does not diverge but eventually converges on some finite number (though it does seem confusing to think about, especially with considerations like Boltzmann brains and quantum uncertainty and matter/energy density).
It seems quite likely to me that this point of convergence is above the value of a human life, as numbers can really get very big, there are a lot of humans, and shrimp are all things considered pretty cool and interesting and a lot of shrimp seem like they would give rise to a lot of stuff.
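One way to make the convergence picture above concrete (my own toy model; the geometric form and the constants are purely illustrative, not something anyone here committed to): suppose the marginal value of the n-th shrimp decays geometrically,

$$v_n = c\, r^{\,n-1},\quad 0 < r < 1 \ \Rightarrow\ \sum_{n=1}^{N} v_n = c\,\frac{1 - r^{N}}{1 - r} \xrightarrow{\ N \to \infty\ } \frac{c}{1 - r}.$$

Every additional shrimp adds strictly positive value, yet the total is bounded by $c/(1-r)$; whether that bound sits above or below the value of a human life is then a further question, which is where the actual disagreement in this thread seems to sit.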
Hm… no, I don’t think so. Enough shrimp to ensure that there keep being shrimp—that’s worth more than one shrimp. Less shrimp than that, though—nah.
Sure, this is all fine (and nothing that I have said contradicts you believing this; it seems like you took my objection to be much narrower than it actually was), but you’re saying that this number is much larger than the value of a human life. That’s the thing that I’m objecting to.
I’ll mostly bow out at this point, but one quick clarification:
I didn’t say “much larger”! Like, IDK, my guess is there is some number of shrimp for which its worth sacrificing a thousand humans, which is larger, but not necessarily “much”.
My guess is there is no number, at least in the least convenient world where we are not talking about shrimp galaxies forming alternative life forms, for which it’s worth sacrificing 10 million humans, at least at current population levels and on the current human trajectory.
10 million is just a lot, and humanity has a lot of shit to deal with, and while I think it would be an atrocity to destroy this shrimp-gigaverse, it would also be an atrocity to kill 10 million people, especially intentionally.