Why would we need this assumption[1], if the hypothetical weren’t centrally about the inherent value of the shrimps/shrimp qualia, and the idea that it adds up? The rest of that essay also features no discussion of the contextual value that the existence of a shrimp injects into the various diverse environments in which it exists, etc. It just throws the big number around, while comparing the value of shrimps to the value of eating a bag of Skittles, after having implicitly justified shrimps having value via shrimps having qualia.
I agree I probably implied a bit too much contextualization. Like, I agree the post has a utilitarian bent, but man, I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation, to the point that I find attempts at creating a “pure qualia shrimp” about as confused and meaningless as trying to argue that 7 bees are more important than a human. “Qualia” isn’t a thing that exists. The only things that exist are your values in all of their complexity and godshatteredness. You can’t make a “pure qualia shrimp”; it doesn’t make any philosophical sense, and pure qualia isn’t real.
And I agree that maybe the post was imagining some pure qualia juice, and I don’t know, maybe in that case it makes sense to dismiss it by doing a reductio ad absurdum on qualia juice, but I don’t currently buy it. I think that would both fail to engage with the best parts of the author’s argument, and also be kind of a bad step in the discourse (like, the previous step was understanding why it doesn’t make sense for 7 bees to be more important than a human, for a lot of different reasons and very robustly, and within that discourse it’s actually quite important to understand why 10^100 shrimp might actually be more important than a human, under at least a lot of reasonable sets of assumptions).
I just really don’t buy the whole “let’s add up qualia” as any basis of moral calculation
Same, honestly. To me, many of these thought experiments seem decoupled from anything practically relevant. But it still seems to me that people often do argue from those abstracted-out frames I’d outlined, and these arguments are probably sometimes useful for establishing at least some agreement on ethics. (I’m not sure what a full-complexity godshatter-on-godshatter argument would even look like (a fistfight, maybe?), and am very skeptical it’d yield any useful results.)
Anyway, it sounds like we’ve mostly figured out what caused the initial drastic disconnect between our views here?
Yeah, I think so, though not sure. But I feel good stopping here.