I am unable to reply directly to people who responded to my previous comment; the system tells me ‘Replies to downvoted comments are discouraged. You don’t have the requisite 5 Karma points to proceed.’ So I will reply here.
@Salemicus
My question was indeed rhetorical. My comment was intended as a brief reality check, not a sophisticated argument. I disagree with you about the importance of climate change and resource shortage, and about the effectiveness of humanitarian aid. But my comment was not intended to supply any substantial list of “causes”; again, it was a reality check. Its intention was to provoke reflection on how supposedly solid reasoning had been used to justify donating to stop an almost absurdly sci-fi Armageddon. I will now, briefly, respond to your points on the causes I raised. The following is, again, not a sophisticated and scientifically literate argument, but then neither was your reply to my comment. It probably isn’t worth responding to.
On global warming, I do not wish to engage in a lengthy argument over a complicated scientific matter. Instead I will recommend reading the first major economic impact analysis, the ‘Stern Review on the Economics of Climate Change’, which you can find easily by searching Google. For comments on and criticisms of that review, see:
Weitzman, M (2007), ‘The Stern Review of the Economics of Climate Change’, Journal of Economic Literature, 45(3), 703-24. http://www.economics.harvard.edu/faculty/weitzman/files/review_of_stern_review_jel.45.3.pdf

Dasgupta, P (2007), ‘Comments on the Stern Review’s Economics of Climate Change’, National Institute Economic Review, 199, 4-7. http://are.berkeley.edu/courses/ARE263/fall2008/paper/Discounting/Dasgupta_Commentary%20-%20The%20Stern%20Review%20s%20Economics%20of%20Climate%20Change_NIES07.pdf

Dietz, S and N Stern (2008), ‘Why Economic Analysis Supports Strong Action on Climate Change: a Response to the Stern Review’s Critics’, Review of Environmental Economics and Policy, 2(1), 94-113.
Broome, J (2008), ‘The Ethics of Climate Change: Pay Now or Pay More Later?’, Scientific American, May 2008.
On renewable resources, I think it is rather obviously stupid to infer ‘we’ve never run out of resources before, so we can’t be doing so now!’. I don’t know what condor eggs are, or what renewable resources we have run out of. I also fail to see why economists would be in a special position to tell us whether we are running out of resources.
On humanitarian causes, I fail to see how humanitarian aid is counterproductive. Perhaps you meant aid to developing countries (which I agree is a complex, though not at all hopeless, issue); I meant aid in times of catastrophe, such as natural disasters or wars.
@gjm
Again, I was not intending to provide a sophisticated argument. I only intended to supply a basic reality check. Again, this response to you will not be sophisticated or scientifically literate, and is probably not worth responding to.
Indeed, it is highly reasonable to give to multiple charities. Given doubt over which charities are the “best” (assuming such a concept makes sense), it may well be reasonable to donate to several. My brief reality check was not meant to say that donating to MIRI was not the best way to spend money, but rather that it was absurd even to consider it, given the other far more pressing and realistic problems in the world today.
You seem to assume that MIRI would be an effective organisation for preventing evil AIs from running around and killing everybody, if such a threat actually existed. I’m not interested in a sophisticated argument over the performance of MIRI, but I think it’s worth pointing out that tenuous assumption.
You also seem to make some kind of Pascal’s Wager. This is rather strange. We could say there is a (very) low probability, perhaps very low indeed, that climate change damages our ecosystems so badly that we can no longer farm food; then we’d all die slowly of starvation. Or perhaps there’s a very low probability that the sun flares to such an extent that life on Earth is wiped out. Ought we invest in flare-guarding equipment? Perhaps there’s a tiny probability that aliens come and kill us all, but that the same aliens die if they think about blue cheese. Ought we erect monuments to the mighty Stilton around the world?

Don’t take this comment too seriously.
Allow me to generalize: Don’t take anything too seriously. (By definition of “too”.)
I don’t (at all) assume that MIRI would in fact be effective in preventing disastrous-AI scenarios. I think that’s an open question, and in the very article we’re commenting on we can see that Holden Karnofsky of GiveWell gave the matter some thought and decided that MIRI’s work is probably counterproductive overall in that respect. (Some time ago; MIRI’s and/or HK’s opinions may have changed relevantly since then.) As I already mentioned, I do not myself donate to MIRI; I was trying to answer the question “why would anyone who isn’t crazy or stupid donate to MIRI?”, and I think it’s reasonably clear that someone neither crazy nor stupid could decide that MIRI’s work does help to reduce the risk of AI-induced disaster.
(“Evil AIs running around and killing everybody”, though, is a curious choice of phrasing. It seems to fit much better with any number of rather silly science fiction movies than with anything MIRI and its supporters are actually arguing might happen. Which suggests that either you haven’t grasped what it is they are worried about, or you have grasped it but prefer inaccurate mockery to engagement—which is, of course, your inalienable right, but may not encourage people here to take your comments as seriously as you might prefer.)
I wasn’t intending to make a Pascal’s wager. Again, I am not myself a MIRI donor, but my understanding is that those who are generally think that the probability of AI-induced disaster is not very small. So the point isn’t that there’s this tiny probability of a huge disaster so we multiply (say) a 10^-6 chance of disaster by billions of lives lost and decide that we have to act urgently. It’s that (for the MIRI donor) there’s maybe a 10% -- or a 99% -- chance of AI-induced disaster if we aren’t super-careful, and they hope MIRI can substantially reduce that.
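(To make that contrast concrete, here is a minimal sketch with entirely hypothetical numbers of my own, not figures anyone above has claimed: the two framings differ simply in the size of the probability that gets multiplied by the stakes.)

```python
# Toy expected-value comparison (hypothetical numbers, purely illustrative).
lives_at_stake = 8e9  # rough order of magnitude for "everyone"

# Pascal's-wager-style framing: a vanishingly small probability times huge stakes.
pascal_style = 1e-6 * lives_at_stake    # ~8,000 expected lives lost

# The framing described above: a donor who puts the risk at, say, 10%.
donor_style = 0.10 * lives_at_stake     # ~800,000,000 expected lives lost

print(f"Pascal-style expected loss: {pascal_style:,.0f} lives")
print(f"10%-risk expected loss:     {donor_style:,.0f} lives")
```

The second calculation is not driven by tiny-probability arithmetic at all; the disagreement is over how large the probability actually is.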
“other far more pressing and realistic problems in the world today”
The underlying argument here is—if I’m understanding right—something like this: “We know that there are people starving in Africa right now. We fear that there might some time in the future be danger from superintelligent artificial intelligences whose goals don’t match ours. We should always prioritize known, present problems over future, uncertain ones. So it’s silly to expend any effort worrying about AI.” I disagree with that third premise, the “always”. Consider global warming: it probably isn’t doing us much harm yet; although the skeptics/deniers are probably wrong, it’s not altogether impossible that they’re right; so trying to deal with global warming also falls into the category of future, uncertain threats—and yet this was your first example of something that should obviously be given priority over AI safety.
I guess (but please correct me if I guess wrong) your response would be that the danger from AI is of much, much lower probability than the danger from global warming. (Because the probability of producing AI at all is small, or because the probability of getting a substantially superhuman AI is small, or because a substantially superhuman AI would be very unlikely to do any harm, or whatever.) You might be right. How sure are you that you’re right, and why?
Extremely tiny probabilities with enormous utilities attached do suffer from Pascal’s-Mugging-type problems. That being said, AI-risk probabilities are, in my estimation, much larger than the sort of probabilities required for Pascal-type problems to come into play. Unless Perrr333 intends to suggest that probabilities involving UFAI really are that small, I think it’s unlikely he/she is actually making any sort of logical argument. It’s far more likely, I think, that he/she is making an argument from incredulity (disguised by seemingly logical arguments, but still at its core motivated by incredulity).
The problem with that, of course, is that arguments from incredulity rely almost exclusively on intuition, and the usefulness of intuition decreases spectacularly as scenarios become more esoteric and further removed from the realm of everyday experience.