But if there’s even a 1% chance that they suffer 20% as intensely as we do, then insect suffering is still, in expectation, responsible for nearly all of the world’s extreme suffering.
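A back-of-the-envelope version of that expected-value claim can be sketched as follows. The population figures are rough assumptions for illustration only (roughly 10^19 insects alive at a time versus roughly 8×10^9 humans), not figures from the original text:

```python
# Rough expected-value sketch of the insect-suffering claim.
# Assumptions (all rough, for illustration only):
#   ~1e19 insects alive at any time vs ~8e9 humans,
#   a 1% chance insects are moral patients,
#   and, if they are, suffering at 20% of human intensity.
insects, humans = 1e19, 8e9
p_sentient, relative_intensity = 0.01, 0.20

expected_insect_units = insects * p_sentient * relative_intensity  # 2e16
human_units = humans * 1.0                                         # 8e9

share = expected_insect_units / (expected_insect_units + human_units)
print(f"expected insect share of suffering: {share:.7f}")  # ≈ 0.9999996
```

Even under these deliberately conservative discounts, the expected insect term dwarfs the human term by more than six orders of magnitude.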
Suppose there’s an objective morality that we’re subjectively uncertain about. A reasonable prior does not put zero mass on the hypothesis that the literally infinite characters in our stories are moral patients. A reasonable protocol does not therefore let this hypothesis dominate its decisions regardless of evidence. Aggregate the uncertainty in some other way.
I do agree that our decision-making needs to distinguish between uncertain ethical problems where a simple expected value is the right solution and uncertain ethical problems where the type of uncertainty requires different handling.
And I do agree that insect suffering is deep enough in the territory of fundamental uncertainty that this question needs to be asked.
When you use the example of “the hypothesis that the literally infinite characters in our stories are moral patients”, I could imagine you having several possible aims:
“literally infinite characters” is an example demonstrating that even very wild hypotheses can feel intuitive from some perspective.
“literally infinite characters” is meant as an extreme case that clearly points out the kind of uncertainty that needs to be discussed when talking about insect suffering.
“literally infinite characters” is meant as an example directly comparable to insect suffering.
My understanding is that you mean the first two, but not the third?
If there is an objective morality, I also expect an objective method for making decisions under moral uncertainty. Math that is discovered rather than invented does not contain special-case handling.
A reasonable prior puts nonzero mass on any hypothesis its holder can imagine, else they could never be convinced of it. To demonstrate that the contents of the hypotheses must not be allowed to interact directly, I picked a hypothesis that contains an infinity.
So I’d expect that method to naturally handle infinities just like insects or humans, in a way that adds up to normality. As the masses on whether insect lives are net good or net bad oscillate around 10% each, the method shouldn’t pivot on a dime between maximizing and minimizing the number of insects, either.
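The pivot worry can be made concrete with a toy model (a sketch of naive expected value, not anyone's actual proposal; the ±1 magnitudes are hypothetical):

```python
# Toy illustration of the "pivot on a dime" worry under naive expected
# value. Magnitudes are hypothetical: +1 per insect life if net good,
# -1 if net bad; the remaining mass is on "insects don't matter" (0).
def naive_ev_per_insect(p_net_good, p_net_bad):
    return p_net_good * (+1.0) + p_net_bad * (-1.0)

# With masses oscillating around 10% each, a 0.1-point wobble flips
# the recommended policy between the two extremes.
print(naive_ev_per_insect(0.101, 0.100) > 0)  # True  -> maximize insects
print(naive_ev_per_insect(0.100, 0.101) > 0)  # False -> minimize insects
```

A tiny fluctuation in credence flips the sign of the recommendation, which is exactly the discontinuity a sane aggregation method should smooth out.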
Even if we’re not full-on fanatics (multiplying probability times magnitude in EV calculations), 0.2% risks are obviously not worth rounding down to zero. A 0.2% chance that we were torturing 10^18 people would be the worst thing in the world!
I don’t think a child would need −log₂(0.2%) ≈ 9 bits of evidence to be convinced that story characters matter. I recommend that your aggregation method treat the hypotheses as untrusted user input and therefore bring them to a common format before you let them interact. I see more than one such possible format.
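For concreteness, both figures above follow directly from the numbers in the text (nothing assumed beyond them):

```python
import math

# Expected harm of a 0.2% chance of torturing 10^18 people:
expected_victims = 0.002 * 1e18  # 2e15 people in expectation

# The "bits of evidence" figure: a hypothesis sitting at a 0.2% prior
# carries about -log2(0.002) bits of surprisal, i.e. roughly the
# evidence needed to promote it to near-certainty.
bits = -math.log2(0.002)
print(expected_victims, round(bits, 2))  # 2e+15 8.97
```

Nine bits is not much evidence; that is the sense in which the child's conviction is cheap, and why the prior alone shouldn't be doing the work of filtering hypotheses.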