I think there is also a real conversation going on here about whether maybe, even if you isolated each individual shrimp into a tiny pocket universe, and you had no way of ever seeing them or visiting the great shrimp rift (a natural wonder clearly greater than any natural wonder on earth), and all you knew for sure was that it existed somewhere outside of your sphere of causal influence, and the shrimp never did anything more interesting than currently alive shrimp, it would still be worth it to kill a human.
Yeah, that’s more what I had in mind. Illusion of transparency, I suppose.
Like, I agree there are versions of the hypothetical that are too removed, but ultimately, I think a central lesson of scope sensitivity is that having a lot more of something often means drastic qualitative changes in what it means to have that thing
Certainly, and it’s an important property of reality. But I don’t think this is what extreme hypotheticals such as the one under discussion actually want to talk about (even if you think this is a more important question to focus on)?
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
The hypothetical is interested in shrimp welfare. If we take the above consideration into account, it stops being about “shrimp” at all (see the shrimps-to-rocks move). The abstractions within which the hypothetical is meant to live break.
And yes, if we’re talking about a physical situation involving the number 10^100, the abstractions in question really do break under forces this strong, and we have to navigate the situation with the broken abstractions. But in thought-experiment land, we can artificially stipulate those abstractions inviolable (or replace the crazy-high abstraction-breaking number with a very-high but non-abstraction-breaking number).
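To spell out the arithmetic behind the “even if it’s epsilon” reading (with ε and V as stand-in symbols of my own, not anything from the post): if a marginal shrimp is worth any fixed ε > 0 and a human is worth V, then N shrimp outweigh the human as soon as
$$N \cdot \varepsilon > V \quad\Longleftrightarrow\quad N > V/\varepsilon,$$
and 10^100 clears that bar for any remotely plausible V/ε. The crazy-large number is there to make the conclusion insensitive to how small ε is.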
Like, my model is that the 10^100 shrimp in this hypothetical are not meant to literally be 10^100 shrimp. They’re meant to be “10^100” “shrimp”. Intuitively, this is meant to stand for something like “a number of shrimp large enough for any value you’re assigning them to become morally relevant”. My interpretation is that the purpose of using a crazy-large number is to elicit that preference with certainty, even if it’s epsilon; not to invite a discussion about qualitative changes in the nature of crazy-large quantities of arbitrary matter.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like:
If Jeff Bezos’ net worth reaches $1 trillion, “he could literally end world poverty and give everyone $1 billion and he will still have $91.5 billion left.”
Like, in those discussions people are almost always trying to invoke numbers like “$1 trillion” as “a number so big that the force of the conclusion must be inevitable”, but like most of the time they just fail because the number isn’t big enough.
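To make the arithmetic error explicit (taking a world population of roughly 8 billion as my own illustrative figure): giving everyone $1 billion would cost about
$$8 \times 10^{9} \times \$10^{9} = \$8 \times 10^{18},$$
which is roughly eight million times a $1 trillion net worth; conversely, $1 trillion split evenly across 8 billion people comes to about $125 each. The number sounds enormous, but it is nowhere near big enough to carry the quoted conclusion.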
If someone was like “man, are you really that confident that a shrimp does not have morally relevant experience that you wouldn’t trade a human for a million shrimp?”, my response is “nope, sorry, 1 million isn’t big enough, that’s just really not that big of a number”. But if you give me a number a trillion trillion trillion trillion trillion trillion trillion trillion times bigger, IDK, yeah, that is a much bigger number.
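For scale, just working out that phrase: a million multiplied by a trillion eight times over is
$$10^{6} \times (10^{12})^{8} = 10^{102},$$
which is in the same territory as the 10^100 from the hypothetical; unlike a million, that genuinely is a number big enough to put real pressure on the tradeoff.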
And correspondingly, for every thought experiment of this kind, I do think there is often a number that will just rip through your assumptions and your tradeoffs. There are just really very very very big numbers.
Like, sure, we all agree our abstractions break here, and I am not confident you can’t find any hardening of the abstractions that makes the tradeoff come out in the direction of the size of the number really absolutely not mattering at all, but I think that would be a violation of the whole point of the exercise. Like, clearly we can agree that we assign a non-zero value to a marginal shrimp. We value that marginal shrimp for a lot of different reasons, but like, you probably value it for reasons that do include things like the richness of its internal experience, and the degree to which it differs from other shrimp, and the degree to which it contributes to an ecosystem, and the degree to which it’s an interesting object of trade, and all kinds of reasons. Now, if we want to extrapolate that value to 10^100, those things are still there, we can’t just start ignoring them.
Like, I would feel more sympathetic to this simplification if the author of the post was a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
I agree that this is a thing people often like to invoke, but it feels to me a lot like people talking about billionaires and not noticing the classical crazy arithmetic errors like
Isn’t it the opposite? It’s a defence against providing too-low numbers, it’s specifically to ensure that even infinitesimally small preferences are elicited with certainty.
Bundling up all “this seems like a lot” numbers into the same mental bucket, and then failing to recognize when a real number is not actually as high as in your hypothetical, is certainly an error one could make here. But I don’t see an exact correspondence...
In the billionaires case, a thought-experimenter may invoke the hypothetical of “if a wealthy person had enough money to lift everyone out of poverty while still remaining rich, wouldn’t them not doing so be outrageous?”, while inviting the audience to fill in the definitions of “enough money” and “poverty”. Practical situations might then just fail to match that hypothetical, and innumerate people might fail to recognize that, yes. But this doesn’t mean that that hypothetical is fundamentally useless to reason about, or that it can’t be used to study some specific intuitions/disagreements. (“But there are no rich people with so much money!” kind of maps to “but I did have breakfast!”.)
And in the shrimps case, hypotheticals involving a “very-high but not abstraction-breaking” number of shrimps are a useful tool for discussion/rhetoric. It allows us to establish agreement/disagreement on “shrimp experiences have inherent value at all”, a relatively simple question that could serve as a foundation for discussing other, more complicated and contextual ones. (Such as “how much should I value shrimp experiences?” or “but do enough shrimps actually exist to add up to more than a human?” or “but is Intervention X to which I’m asked to donate $5 going to actually prevent five dollars’ worth of shrimp suffering?”.)
Like, I think having a policy of always allowing abstraction breaks would just impoverish the set of thought experiments we would be able to consider and use as tools. Tons of different dilemmas would collapse to Pascal’s mugging or whatever.
Like, I would feel more sympathetic to this simplification if the author of the post was a hardcore naive utilitarian, but they self-identify as a Kantian. Kantianism is a highly contextual ethical theory that clearly cares about a bunch of different details of the shrimp, so I don’t get the sense the author wants us to abstract away everything but some supposed “happiness qualia” or “suffering qualia” from the shrimp.
Hmm… I think this paragraph at the beginning is what primed me to parse it this way:
Merriam-Webster defines torture as “the infliction of intense pain (as from burning, crushing, or wounding) to punish, coerce, or afford sadistic pleasure.” So I remind the reader that it is part of the second thought experiment that the shrimp are sentient.
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I suppose it’s possible that if I had the full context of the author’s writing in mind, your interpretation would have been obviously correct[2]. But the essay itself reads the opposite way to me.
Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I agree probably I implied a bit too much contextualization. Like, I agree the post has a utilitarian bent, but man, I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation, and I find attempts at trying to create a “pure qualia shrimp” about as confused and meaningless as trying to argue that 7 bees are more important than a human. “Qualia” isn’t a thing that exists. The only thing that exists are your values in all of their complexity and godshatteredness. You can’t make a “pure qualia shrimp”, it doesn’t make any philosophical sense, pure qualia isn’t real.
And I agree that maybe the post was imagining some pure qualia juice, and I don’t know, maybe in that case it makes sense to dismiss it by doing a reductio ad absurdum on qualia juice, but I don’t currently buy it. I think that would both fail to engage with the good parts of the author’s argument, and also be kind of a bad step in the discourse (like, the previous step was understanding why it doesn’t make sense for 7 bees to be more important than a human, for a lot of different reasons and very robustly, and within that discourse it’s actually quite important to understand why 10^100 shrimp might actually be more important than a human, under at least a lot of reasonable sets of assumptions).
I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation
Same, honestly. To me, many of these thought experiments seem decoupled from anything practically relevant. But it still seems to me that people often do argue from those abstracted-out frames I’d outlined, and these arguments are probably sometimes useful for establishing at least some agreement on ethics. (I’m not sure what a full-complexity godshatter-on-godshatter argument would even look like (a fistfight, maybe?), and am very skeptical it’d yield any useful results.)
Anyway, it sounds like we mostly figured out what the initial drastic disconnect between our views here was caused by?
Yeah, I think so, though not sure. But I feel good stopping here.
[1] A pretty strong one, I think, since “are shrimp qualia of nonzero moral relevance?” is often the very point of many discussions.
[2] Indeed, failing to properly familiarize myself with the discourse and the relevant frames before throwing in hot takes was my main blunder here.