It’s worth noting that a 1 in a million prior of a charity being extraordinarily effective isn’t that unreasonable: there are over 1 million 501(c)(3) organizations in the U.S. alone, and presumably a large fraction of these are charities, and presumably most of them are not extraordinarily effective.
(I’m not claiming that you argue that it is unreasonable, I’m just including the data here for others to refer to.)
If I ask you to guess which of a million programs produces an output that scores highest on some complicated metric, and you don’t know anything about the programs, you have a one in a million chance of guessing correctly. But given the further information that these three, and only these three, were written with the specific goal of doing well on that metric, while all the others were aiming at related but different metrics, it suddenly becomes more likely than not that one of those three does best.
There are very few charities that are trying to be the most efficient from a utilitarian point of view. It’s likely that one of them is the most efficient.
Ok, but if that’s your reference class, “isn’t a donkey sanctuary” counts as evidence you can update on. It seems there are large classes of charities we can be confident will not be extraordinarily effective, and these don’t include FHI, MIRI, etc.
Yes. There’s a choice as to what to put into the prior and what to put into the likelihood. This makes it more difficult to make claims like “this number is a reasonable prior and this one is not”. Instead, one has to specify the population the prior is about, and this in turn affects what likelihood ratios are reasonable.
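The point about the prior/likelihood split being a modelling choice can be made concrete with a toy calculation. This is only a sketch with made-up numbers: N is the ~1 million 501(c)(3) figure from above, K = 3 stands in for the handful of charities explicitly optimizing the metric, and p (the chance that the best charity is among those K) is an assumed value, not a claim. The two framings, a diffuse prior over all charities updated by a large likelihood ratio, versus a prior defined directly over the narrow reference class, give the same posterior:

```python
# Toy Bayesian update with assumed numbers (N, K, p are illustrative only).
N = 1_000_000   # total charities, per the 501(c)(3) figure above
K = 3           # charities explicitly optimizing the metric (assumed)
p = 0.6         # assumed: probability the best charity is among the K

# Framing 1: uniform prior over all N charities, then update on the
# evidence "this charity is one of the K optimizers".
prior_odds = (1 / N) / ((N - 1) / N)          # = 1 / (N - 1)
# P(in K | best) = p;  P(in K | not best) = (K - p) / (N - 1),
# since the non-best charities share the remaining K - p expected slots.
likelihood_ratio = p / ((K - p) / (N - 1))
posterior_odds = prior_odds * likelihood_ratio  # simplifies to p / (K - p)
posterior_1 = posterior_odds / (1 + posterior_odds)

# Framing 2: fold the same evidence into the reference class up front,
# i.e. put the prior over only the K optimizers.
posterior_2 = p / K

print(posterior_1, posterior_2)  # both 0.2, up to float rounding
```

Either way the posterior that a given optimizer is the best charity comes out to p/K; what changes is only which part of the calculation is labelled “prior” and which “likelihood”. That’s why arguing about whether 1-in-a-million is a reasonable prior is empty until the reference population is pinned down.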