I think the idea that you are deciding for sufficiently similar minds as well as your own may help here. If you and everyone who thinks like you are trading not-saved humans now for a slightly increased chance of saving everyone in the future, what would you decide?
(Note: if there are 400 people who think like you, and you’re using multiplicative increases, you’ve just increased the chance of success by about four orders of magnitude. If you’re using additive increases, you’ve gone over unity. Stick to odds for things like this, maybe!)
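A minimal sketch of that arithmetic, with hypothetical per-person numbers (the 0.003 additive bump and the 1.023 multiplicative factor are assumptions picked to reproduce the “over unity” and “four orders of magnitude” outcomes, not figures from this thread):

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

n = 400          # people who "think like you"
base_p = 0.01    # assumed baseline probability of success

# Additive: each person adds 0.003 to the probability.
# 400 * 0.003 = 1.2, so the "probability" goes over unity.
print(base_p + n * 0.003)          # 1.21 -- not a probability

# Multiplicative: each person multiplies the chance by ~1.023.
# 1.023 ** 400 is roughly 1e4 -- the "four orders of magnitude".
print(1.023 ** n)                  # ~8.9e3

# Odds: apply the same per-person factor to the odds instead,
# then convert back to a probability.
odds = prob_to_odds(base_p) * 1.023 ** n
print(odds_to_prob(odds))          # ~0.989 -- still a valid probability
```

The odds version is the point of the suggestion: per-person evidence multiplies the odds, and the implied probability stays strictly between 0 and 1 no matter how many similar minds you aggregate.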
Well, the marginal impact of a life-not-saved on the probability of a p-sing (can I call it that? What I really want is a convenient shorthand for “tiny incremental increase in the probability of a positive singularity.”) probably goes down as we put more effort into achieving a p-sing, though not significantly for this problem. The law of diminishing marginal returns gets you every time.
Let’s not get too caught up in the numbers (which I do think are useful for considering a real trade-off). I don’t know how likely a p-sing is, nor how much my efforts can contribute to one. I am interested in analysis of this question, but I don’t think we can have high confidence in a prediction that goes out 20 years or more, especially if the situation requires the introduction of such world-shaping technologies as would lead up to a singularity. If everyone acts as I do, but we’re massively wrong about how much impact our efforts have (which is likely), then we all waste enormous effort on nothing.
Given that you are only one individual, the increase in the chance of a p-sing per unit of money you give is roughly linear: one person’s contribution moves the total effort such a short distance along the curve that the curve is locally flat. Diminishing marginal returns shouldn’t be an issue.
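Here’s a minimal sketch of that point, assuming a hypothetical concave success curve (the exponential form and all the numbers are illustrative assumptions, not anything from this discussion):

```python
import math

def p_success(total_effort):
    # Hypothetical concave curve: each extra unit of effort helps less.
    return 1 - math.exp(-0.001 * total_effort)

current_total = 5000   # assumed effort already committed by everyone else

# Aggregate diminishing returns: the first 5000 units bought far more
# probability than the next 5000 will.
print(p_success(5000) - p_success(0))        # ~0.9933
print(p_success(10000) - p_success(5000))    # ~0.0067

# Individual linearity: at the current margin, each single unit buys
# about the same amount, because one person only moves the total a
# tiny distance along the curve.
print(p_success(current_total + 1) - p_success(current_total))      # ~6.73e-06
print(p_success(current_total + 2) - p_success(current_total + 1))  # ~6.73e-06
```

The aggregate curve shows heavy diminishing returns, but successive single units at the current margin buy almost exactly the same probability, which is what “roughly linear for one individual” means.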