The probability of us being wiped out by badly done AI is at least 20%. I agree. The assumption of risks from AI is by itself reasonable. But I am skeptical of making complex predictions based on that assumption, and skeptical of calculating the expected utility of mitigating risks from AI according to the utility associated with its logical implications. Take your following comment:
I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity, in their galaxy-civilizations, are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.
I don’t disagree that friendly AI research is currently a better option for charitable giving than charities concerned with environmental problems. Yet I have a hard time accepting that discounting the extinction of most species on the basis of the expected utility of colonizing the Hercules supercluster is sensible.
If you want to convince people like Holden Karnofsky and John Baez, you have to show that risks from AI are more likely than they believe and that contributing to SI can make a difference. If you just argue in terms of logical implications, they will continue to frame SI in terms of Pascal’s mugging.
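To make that framing concrete, here is a minimal sketch of the expected-utility structure I am uneasy about. Every number and name in it is a made-up placeholder of mine, not a figure anyone in this exchange has endorsed; it only shows how an astronomically large payoff swamps the calculation no matter how small the probability attached to it.

```python
# A minimal sketch of the expected-utility comparison at issue.
# All numbers below are illustrative assumptions, not figures from this discussion.

def expected_utility(probability: float, utility: float) -> float:
    """Naive expected utility: probability of the outcome times its utility."""
    return probability * utility

# Hypothetical stakes: a galaxy-scale future versus an environmental outcome.
utility_of_colonized_future = 1e30   # assumed astronomical payoff (arbitrary units)
utility_of_saved_species = 1e9       # assumed payoff of averting species loss

# Even if funding AI safety shifts the odds of that future only slightly...
p_shift_from_ai_safety = 1e-6
# ...and environmental charity helps with near certainty...
p_shift_from_environment = 0.9

eu_ai = expected_utility(p_shift_from_ai_safety, utility_of_colonized_future)
eu_env = expected_utility(p_shift_from_environment, utility_of_saved_species)

# The astronomical utility dominates regardless of how small its probability is,
# which is exactly the structure critics label "Pascal's mugging".
print(f"AI safety:     {eu_ai:.3e}")
print(f"Environmental: {eu_env:.3e}")
print("AI safety dominates" if eu_ai > eu_env else "Environmental dominates")
```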
...you ought to be able to exhibit a particular premise or reasoning step that you disagree with.
I can’t. I can only voice my discomfort. And according to your posts on the Lifespan Dilemma and Pascal’s mugging, you share that discomfort, yet you too are unable to pinpoint a particular step that you disagree with.