One notion that helped deconfuse these sorts of incredibly low probabilities for me is to just do a case split.
Suppose we have a cup of coffee. Probably, if you drink it, nothing much happens. But by Cromwell's rule it is conceivable that it was actually planted by an Eldritch trickster god, and that if you drink it the trickster god will torture 3^^^^^3 people for 100 years.
Now obviously the trickster god scenario is very unlikely; I'd say its probability is much less than 1e-1000000. (IMO the estimate should have at least as many zeros as there were characters in my description of the scenario, but that would be unwieldy.) For the purposes of this thought experiment, though, let's round it up to 1e-1000000.
Would it be bad to drink the coffee? Well, if we have linear unbounded utility, we can do the expected utility calculation and get 1e-1000000 * 3^^^^^3 = too big to be even close to acceptable.
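To see concretely why the expected-utility calculation blows up, we can compare orders of magnitude in log space. This is just a toy sketch: the utility figure below is a stand-in I chose for illustration, vastly smaller than 3^^^^^3 but already large enough to make the point.

```python
# Work in log10 space: log10(EU) = log10(p) + log10(|U|).
log10_p = -1_000_000   # the rounded-up probability, 1e-1000000
log10_U = 10**100      # a made-up disutility with 10^100 digits; 3^^^^^3
                       # is incomprehensibly larger still

# Expected disutility of drinking the coffee, in log10 terms.
log10_EU = log10_p + log10_U

# The tiny probability barely dents the utility's magnitude:
# the expected harm still has about 10^100 digits.
print(log10_EU > 0)
```

The point the sketch makes is that subtracting a mere million orders of magnitude from an unboundedly large utility changes almost nothing, which is exactly why linear unbounded utilities let such scenarios dominate.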
But this gives you the expected badness. In reality, either we are in the trickster god scenario, or we are not. If we are not in the trickster god scenario (or any scenario like it), then it’s fine to drink the coffee. If we are in the scenario, then it’s incredibly bad to drink it.
So there's a small probability that we'd be making a terrible mistake in drinking it, and a large probability that we'd be making a minor mistake in not drinking it. Though the trickster god belief probably leads to a bunch of other correlated behaviors that, in total, would be a big mistake.
So, reordering your life entirely in the service of a utility with probability << 1e-1000000 is probably bad, but it might be good with probability << 1e-1000000, and if you accept unbounded utilities, then that might make it worth it.