It seems to me that if you go through a reasoning process like what Rethink Priorities did for its moral weights project, then it’s hard to come up with numbers small enough that shrimp welfare looks unimportant.
If you think people are doing a bad job of picking small numbers, then what numbers do you think they should pick instead, and what’s your reasoning?
Rethink Priorities does calculations using made-up numbers which, of course, have the same problem. 1% for the likelihood that insects are sentient is absurdly generous.
what numbers do you think they should pick instead
I have no idea. But I know that the ones you have aren’t it.
Why is 1% absurdly generous?
Obviously the Hard Problem of Consciousness is a thing. Rethink Priorities arrived at its estimates by looking at the limited evidence we do have access to. Given that evidence, it seems to me that you could justify a smaller probability than 1%, but it’s hard to justify a probability so small that insect welfare stops being a relevant concern.
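To make the scale question concrete, here is a minimal sketch of the expected-value comparison this hinges on. Every number in it is a placeholder supplied for illustration, not a Rethink Priorities estimate; the point is only to show what the disagreement turns on, not to settle it.

```python
# Break-even sentience probability: how low would P(sentient) have to be
# before the expected welfare at stake falls below a chosen benchmark?
# Every number below is an illustrative placeholder, not a Rethink
# Priorities estimate.

def expected_stake(p_sentient, population, welfare_weight):
    """Expected welfare at stake, in human-equivalent units."""
    return p_sentient * population * welfare_weight

def breakeven_probability(benchmark, population, welfare_weight):
    """P(sentient) at which the expected stake equals the benchmark."""
    return benchmark / (population * welfare_weight)

population = 1e18      # placeholder: a very large invertebrate population
welfare_weight = 1e-4  # placeholder: welfare per individual vs. one human
benchmark = 1e9        # placeholder: a stake you would clearly care about

print(f"Stake at P=1%: {expected_stake(0.01, population, welfare_weight):.0e}")
print(f"Break-even P:  {breakeven_probability(benchmark, population, welfare_weight):.0e}")
# With these placeholders: the stake at 1% is 1e12 units, and P(sentient)
# would have to fall below 1e-5 before the stake drops under the benchmark.
```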
Greater uncertainty about insect consciousness should lead to a larger probability, not a smaller one. This is the same mistake that we complain about AI skeptics making—deep uncertainty about whether AI could kill everyone means you should treat the probability as 50%, not 0%.
By this reasoning, we should treat the chance of AI killing half the world as 50%, the chance of AI killing a quarter of the world as 50%, the chance of either AI or a meteor killing the world as 50%, and so on.
And then you have to estimate the chance that electrons or video game characters are sentient. It’s nonzero, right? Maybe electrons only have a 10^-20 chance of being sentient.
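One way to spell out this rebuttal, in my framing rather than the commenters’: if “deep uncertainty” licensed a 50% probability for every uncertain proposition, then a handful of mutually exclusive uncertain outcomes would already sum to more than 1, which no coherent probability assignment allows. A minimal sketch:

```python
# Sketch of the reductio: apply "deep uncertainty means 50%" to several
# mutually exclusive outcomes and check whether the result is a coherent
# probability assignment. The outcomes listed are illustrative.

outcomes = [
    "AI kills everyone",
    "AI kills exactly half the world",
    "AI kills exactly a quarter of the world",
    "a meteor, not AI, kills everyone",
]

assignment = {outcome: 0.5 for outcome in outcomes}  # the 50% rule of thumb

total = sum(assignment.values())
print(f"Total probability over mutually exclusive outcomes: {total}")
if total > 1.0:
    print("Incoherent: mutually exclusive outcomes cannot sum past 1, "
          "so the 50% rule cannot be applied across the board.")
```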
I think the probability that electrons are sentient is much higher than 10^-20. Nonetheless, that doesn’t convince me that electron well-being matters far more than anything else.
I don’t have an unbounded utility function, so I don’t chase extremely small probabilities of extremely large utilities (Pascal’s Mugging).
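To illustrate what a bounded utility function buys in a Pascal’s Mugging setup, here is a minimal sketch with made-up numbers: under an unbounded utility the mugger’s tiny-probability offer swamps everything, while a bounded utility caps its contribution and the mundane certain option wins. The squashing function and all of the figures are my own illustrative choices, not anyone’s considered estimates.

```python
# Pascal's Mugging sketch with made-up numbers: compare the expected value
# of a tiny-probability, astronomically large payoff under an unbounded
# utility function versus a bounded one.
import math

p_mugger = 1e-20       # placeholder: probability the mugger's claim is true
claimed_payoff = 1e40  # placeholder: the astronomical payoff on offer
ordinary_payoff = 1.0  # a mundane, certain alternative

# Unbounded utility: the offer dominates any mundane option.
print(f"Unbounded EV of the offer: {p_mugger * claimed_payoff:.0e}")  # 1e+20

# Bounded utility: squash payoffs into [0, bound), so no outcome can
# contribute more than the bound, however large the claimed payoff.
bound = 100.0
def bounded_utility(x):
    return bound * (1 - math.exp(-x / bound))

print(f"Bounded EV of the offer:    {p_mugger * bounded_utility(claimed_payoff):.2e}")  # ~1e-18
print(f"Bounded value of certainty: {bounded_utility(ordinary_payoff):.3f}")            # ~0.995
```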