As I’ve pointed out before, when people say “insects suffer X% as much as humans” or even “there’s a Y% chance that insects are able to suffer,” that tells you more about the kind of numbers people pick when they need a small number than it tells you about insect suffering. Most people are not good at picking appropriately small numbers; they just pick something smaller than the numbers they see every day, which isn’t small enough. If they actually picked appropriately sized numbers instead of saying “if there’s even a 1% chance,” you could do the calculations in this article and conclude that insect suffering should be ignored.
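A minimal Python sketch of what that sensitivity looks like (the sentience probability, suffering ratio, and insects-per-human figure below are placeholder assumptions, not numbers from the article or from anyone in this thread):

```python
# Toy sketch of the kind of expected-value calculation referred to above.
# Every input here is a made-up placeholder, not a figure from the article.

def expected_insect_weight(p_sentient, suffering_ratio, insects_per_human):
    """Expected moral weight of the insect population relative to one human."""
    return p_sentient * suffering_ratio * insects_per_human

# A commonly picked "small" probability (1%) vs. a genuinely tiny one.
for p_sentient in (1e-2, 1e-9):
    weight = expected_insect_weight(p_sentient,
                                    suffering_ratio=1e-3,    # assumed
                                    insects_per_human=1e9)   # assumed
    print(f"p_sentient={p_sentient:g}: insects weigh {weight:g}x one human")

# 1e-2 gives ~10,000x one human (insects dominate); 1e-9 gives ~0.001x
# (insects are negligible). The conclusion is almost entirely a function of
# which "small" number was picked.
```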
The first half of this seems true (the estimates are quite arbitrary), but I don’t get why you’re confident about the second half. What makes your estimate of the “appropriately sized numbers” less arbitrary and more plausible?
Since ordinary people don’t think insect suffering matters, if they pick a number high enough to imply the opposite, that number is presumptively too high. This doesn’t prove it’s too high, but if people are bad at picking numbers and they pick a number inconsistent with their other beliefs, we should presume that the number is wrong, not that the other belief is.
I think this depends on the assumptions that a) ordinary people have a considered belief that insect suffering doesn’t matter, and b) this belief depends on the belief that insects don’t suffer (much).
If most people just haven’t given any serious thought to insect suffering, and the main reason they tend to act like it doesn’t matter is because that’s the social default, then their numerical estimates (which are quite arbitrary, but plausibly based on more thought than they’ve ever previously given to the question) might be at least as good a guide to the ground truth as their prior actions are.
And if someone doesn’t care about insect suffering, not because they’re confident that insects don’t experience non-trivial suffering but because they simply don’t care about insects (perhaps because they don’t instinctively feel empathy for insects, they find insects annoying, they know insects spread disease, etc.), then the apparent conflict between their indifference and their estimates is extremely weak evidence against the accuracy of their estimates.
It seems to me that if you go through a reasoning process like what Rethink Priorities did for its moral weights project, then it’s hard to come up with sufficiently small numbers that shrimp welfare looks unimportant.
If you think people are doing a bad job of picking small numbers, then what numbers do you think they should pick instead, and what’s your reasoning?
Rethink Priorities does calculations using made-up numbers which, of course, have the same problem. A 1% likelihood that insects are sentient is absurdly generous.
“what numbers do you think they should pick instead”
I have no idea. But I know that the ones you have aren’t it.
Why is 1% absurdly generous?
Obviously the Hard Problem of Consciousness is a thing. Rethink Priorities arrived at its estimates by looking at the limited evidence we do have access to. Given that evidence, it seems to me that you could justify a smaller probability than 1%, but it’s hard to justify a probability so small that insect welfare stops being a relevant concern.
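A rough back-of-the-envelope Python sketch of why that is (the insect count, human count, and per-insect welfare ratio below are illustrative assumptions chosen only for the arithmetic, not Rethink Priorities’ figures):

```python
# Back-of-the-envelope: how small would the sentience probability have to be
# before expected insect welfare stops outweighing human-scale concerns?
# All inputs are illustrative assumptions, not Rethink Priorities' estimates.

HUMANS = 8e9           # roughly the human population
INSECTS = 1e19         # a commonly cited order-of-magnitude guess
WELFARE_RATIO = 1e-4   # assumed welfare capacity of one insect vs. one human

# Break-even point: p * INSECTS * WELFARE_RATIO == HUMANS
break_even_p = HUMANS / (INSECTS * WELFARE_RATIO)
print(f"break-even sentience probability ~ {break_even_p:.0e}")  # ~8e-06

# Under these (made-up) inputs, any probability much above roughly 1 in
# 100,000 means expected insect welfare exceeds total human welfare, which is
# why it's hard to pick a defensible number small enough to dismiss the concern.
```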
Greater uncertainty about insect consciousness should lead to a larger probability, not a smaller one. This is the same mistake we complain about AI skeptics making: deep uncertainty about whether AI could kill everyone means you should treat the probability as 50%, not 0%.
By this reasoning, we should treat the chance of AI killing half the world as 50%, the chance of AI killing a quarter of the world as 50%, the chance of either AI or a meteor killing the world as 50%, and so on.
And you then have to estimate the chances of electrons or video game characters being sentient. It’s nonzero, right? Maybe electrons only have a 10^-20 chance of being sentient.
I think the probability that electrons are sentient is much higher than 10^-20. Nonetheless, that doesn’t convince me that electron well-being matters far more than anything else.
I don’t have an unbounded utility function where I chase extremely small probabilities of extremely big utilities (Pascal’s Mugging).
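A toy Python illustration of the difference that makes (the saturating utility function and all the payoff numbers are arbitrary assumptions, chosen only to show the shape of the argument):

```python
import math

# Toy comparison: an unbounded expected value can be dragged arbitrarily high
# by a large enough promised payoff, while a bounded utility saturates, so a
# 1e-20 probability can never matter much. All numbers are arbitrary.

MUGGER_P = 1e-20   # the mugger's tiny probability

def unbounded_ev(p, payoff):
    return p * payoff

def bounded_ev(p, payoff, bound=100.0):
    # Utility saturates at `bound` no matter how large the payoff gets.
    utility = bound * (1 - math.exp(-payoff / bound))
    return p * utility

for payoff in (1e6, 1e30, 1e60):
    print(f"payoff={payoff:g}: unbounded EV={unbounded_ev(MUGGER_P, payoff):g}, "
          f"bounded EV={bounded_ev(MUGGER_P, payoff):g}")

# Unbounded EV scales with the payoff (1e-14, 1e10, 1e40 here); bounded EV
# never exceeds MUGGER_P * 100 = 1e-18, so the mugging never goes through.
```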