And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk.
So is this a bad reason to give $100 to MIRI:

“MIRI reduces existential risks by a non-tiny probability. My contribution of $100 would increase the chance of MIRI’s success, however, by only a tiny probability. Still, multiplying this tiny probability increase by the good that would occur if my $100 did end up making the difference justifies my giving $100 to MIRI.”
On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. Calculations of marginal impact in POKO/dollar (probability of an OK outcome per dollar) are sensible for comparing two x-risk mitigation efforts in demand of money, but in this case each marginal added dollar is rightly going to account for a very tiny slice of probability, and this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginal probabilities per added unit of effort. It would only be Pascal’s Wager if the whole route-to-humanity-being-OK were assigned a tiny probability, and then a large payoff were used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
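To make that distinction concrete, here is a minimal numeric sketch in Python. Every probability, budget, and payoff in it is invented purely for illustration; none of the figures come from the discussion above.

```python
# Minimal illustrative sketch: every number below is invented for the example.
# It contrasts (a) a serious effort whose overall chance of improving the
# outcome is non-tiny, even though each marginal dollar buys only a tiny
# slice of probability, with (b) a Pascalian wager, where the whole route
# is assigned a tiny probability and a huge payoff is invoked to overwhelm it.

# (a) Serious effort: suppose a fully funded project raises P(OK) by 10%,
# spread across a $100M budget. The marginal slice per dollar is tiny,
# but the project as a whole is not a long shot.
project_delta_p = 0.10
project_budget = 100_000_000
marginal_p_per_dollar = project_delta_p / project_budget  # = 1e-9 per dollar

# (b) Pascalian wager: the entire route gets a tiny probability, and an
# astronomically large payoff is multiplied in to shut down comparison.
wager_total_p = 1e-20
wager_payoff = 1e30
pascalian_ev = wager_total_p * wager_payoff  # = 1e10, driven entirely by the payoff

print(f"(a) whole-project gain in P(OK): {project_delta_p:.0%}  (non-tiny)")
print(f"(a) marginal P(OK) per dollar:   {marginal_p_per_dollar:.1e}  (tiny, and fine)")
print(f"(b) whole-route probability:     {wager_total_p:.0e}  (tiny: Pascalian)")
print(f"(b) 'expected value' of wager:   {pascalian_ev:.0e}  (the payoff does all the work)")

# The sensible use of marginal numbers is comparing P(OK)/dollar across
# efforts that each have a non-tiny overall shot, e.g. an invented rival:
rival_marginal_p_per_dollar = 0.05 / 100_000_000
better = max(marginal_p_per_dollar, rival_marginal_p_per_dollar)
print(f"fund whichever has the higher marginal P(OK)/dollar: {better:.1e}")
```

The point of the sketch is that the tiny number in case (a) is a per-dollar slice of a non-tiny project, while the tiny number in case (b) attaches to the entire route, which is the only case that counts as Pascal's Wager.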
Eliezer explains it in his other comment:
Before scanning, I precommit to renouncing, abjuring, and distancing MIRI from the argument in the video if it argues for no probability higher than 1 in 2000 of FAI saving the world, because I myself do not positively engage in long-term projects on the basis of probabilities that low (though I sometimes avoid doing things for dangers that small). There ought to be at least one x-risk effort with a greater probability of saving the world than this—or if not, you ought to make one. If you know yourself for an NPC and that you cannot start such a project yourself, you ought to throw money at anyone launching a new project whose probability of saving the world is not known to be this small.
Thanks for answering. I just gave $100 to MIRI.