I completely agree with the intent of this post. These are all important issues SI should officially answer. (Edit: SI’s official reply is here.) Here are some of my thoughts:
I completely agree with objection 1. I think SI should look into doing exactly as you say. I also feel that Friendliness has a very high failure chance, and that all SI can accomplish is a very small marginal decrease in existential risk. However, I feel this is the result of existential risk being so high and so difficult to overcome (the Great Filter) rather than of SI being ineffective. As such, for them to engage this objection would be to admit defeatism and millennialism, so they put it out of mind: they need the motivation to keep soldiering on despite the sure defeat.
Objection 2 is interesting, though you define AGI differently, as you say. Some points against it: only one AGI needs to be in agent mode to realize existential risk, even if billions of tool-AIs are already running safely. Tool-AI is closer in definition to narrow AI, which, as you point out, we already have plenty of and are improving. It is likely that very advanced tool-AIs will indeed be the first to achieve some measure of AGI capability. SI uses AGI to mean agent-AI precisely because at some point someone will move beyond narrow/tool-AI to agent-AI. AGI doesn’t “have to be an agent”, but there will likely be agent-AI at some point, and I see no means of limiting all AGI to tool-AI in perpetuity.
‘Race for power’ should be expanded to ‘incentivised agent-AI’. There are great incentives to create agent-AI over tool-AI: an agent-AGI would be tireless, ever watchful, and far faster and smarter than us, with answers we do not necessarily understand. These incentives are economic, military, and so on, and they apply not only to implementing agent-AI first but to being better and faster at practical, everyday tasks.
On objection 3, I mostly agree. Though should tool-AIs achieve such power, they could be used as weapons to realize existential risk, much as nuclear, chemical, biological, and nanotechnological advances can.
I think this post focuses too much on “Friendliness theory”. As Zack_M_Davis stated, SIAI would more appropriately have been called “The Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration”. Friendliness is one word that could encapsulate a basket of possible outcomes, and they’re agile enough to change position should that be shown necessary, as some of your comments request. Maybe SI should make tool-AI a clear stepping stone to Friendliness, or at least a clear avenue worth exploring. Agreed.
Much agreed re: feedback loops.
“Kind of organization”: painful but true.
However, I don’t think that “Cause X is the one I care about and Organization Y is the only one working on it” is a good reason to support Organization Y. For donors determined to donate within this cause, I encourage you to consider donating to a donor-advised fund while making it clear that you intend to grant out the funds to existential-risk-reduction-related organizations in the future. (One way to accomplish this would be to create a fund with “existential risk” in the name; this is a fairly easy thing to do, and one person could do it on behalf of multiple donors.) For one who accepts my arguments about SI, I believe withholding funds in this way is likely to be better for SI’s mission than donating to SI, through incentive effects alone (not to mention my specific argument that SI’s approach to “Friendliness” seems likely to increase risks).
Good advice; I’ll look into doing this. One reason I’ve been donating to them is so they can keep the lights on long enough to see and heed this kind of criticism. Maybe those incentives weren’t appropriate.
This post limits my desire to donate additional money to SI beyond previous commitments. I consider it a landmark in SI criticism. Thank you for engaging this very important topic.
Edit: After SI’s replies and careful consideration, I decided to continue donating directly to them, as they have a very clear roadmap for improvement and still represent the best value in existential risk reduction.
You’re an accomplished and proficient philanthropist; if you do make steps in the direction of a donor-directed existential risk fund, I’d like to see them written about.
I am unable to reply directly to the responses to my previous comment; the system tells me ‘Replies to downvoted comments are discouraged. You don’t have the requisite 5 Karma points to proceed.’ So I will reply here.
@Salemicus
My question was indeed rhetorical. My comment was intended as a brief reality check, not a sophisticated argument. I disagree with you about the importance of climate change and resource shortage, and about the effectiveness of humanitarian aid. But my comment was not intended to supply any substantial list of “causes”; again, it was a reality check, intended to provoke reflection on how supposedly solid reasoning had justified donating to stop an almost absurdly sci-fi Armageddon. I will now briefly respond to your points on the causes I raised. The following is, again, not a sophisticated and scientifically literate argument, but then neither was your reply to my comment. It probably isn’t worth responding to.
On global warming, I do not wish to engage in a lengthy argument over a complicated scientific matter. Rather, I will recommend reading the first major economic impact analysis, the ‘Stern Review on the Economics of Climate Change’, which you can find easily by searching Google. For comments and criticisms of that review, see:
Weitzman, M (2007), ‘The Stern Review of the Economics of Climate Change’, Journal of Economic Literature, 45(3), 703-24. http://www.economics.harvard.edu/faculty/weitzman/files/review_of_stern_review_jel.45.3.pdf
Dasgupta, P (2007), ‘Comments on the Stern Review’s Economics of Climate Change’, National Institute Economic Review, 199, 4-7. http://are.berkeley.edu/courses/ARE263/fall2008/paper/Discounting/Dasgupta_Commentary%20-%20The%20Stern%20Review%20s%20Economics%20of%20Climate%20Change_NIES07.pdf
Dietz, S and N Stern (2008), ‘Why Economic Analysis Supports Strong Action on Climate Change: A Response to the Stern Review’s Critics’, Review of Environmental Economics and Policy, 2(1), 94-113.
Broome, J (2008), ‘The Ethics of Climate Change: Pay Now or Pay More Later?’, Scientific American, May 2008.
On renewable resources, I think it is rather obviously stupid to reason by induction that ‘we’ve never run out of resources before, so we can’t be doing so now!’. I don’t know what condor eggs are, or what renewable resources we have supposedly run out of. I also fail to see why economists would be in a special position to tell us whether we are running out of resources.
On humanitarian causes, I fail to see how humanitarian aid is counter-productive. Perhaps you meant aid to developing countries (which I agree is a complex, though not at all hopeless, issue). I meant aid in times of catastrophe, such as natural disasters or wars.
@gjm
Again, I was not intending to provide a sophisticated argument. I only intended to supply a basic reality check. Again, this response to you will not be sophisticated or scientifically literate, and is probably not worth responding to.
Indeed, given doubt over which charities are the “best” (assuming such a concept even makes sense), it may well be reasonable to donate to multiple charities. My brief reality check was not meant to say that donating to MIRI is not the best way to spend money, but rather that it is absurd to even consider, given the other far more pressing and realistic problems in the world today.
You seem to assume that MIRI would be an effective organisation for preventing evil AIs from running around and killing everybody, if such a threat actually existed. I’m not interested in a sophisticated argument over the performance of MIRI, but I think it’s worth bringing up that tenuous assumption.
You also seem to make some kind of Pascal’s Wager. This is rather strange. We could say there is a very low probability, perhaps very low indeed, that climate change messes up our ecosystems so badly that we can no longer farm food; then we’d all die slowly of starvation. Or perhaps there’s a very low probability that the sun flares to such an extent that life on Earth is wiped out. Ought we invest in flare-guarding equipment? Perhaps there’s a tiny probability that aliens come and kill us all, but that the same aliens die if they think about blue cheese. Ought we erect monuments to the mighty Stilton around the world?
Don’t take this comment too seriously.
Allow me to generalize: Don’t take anything too seriously. (By definition of “too”.)
I don’t (at all) assume that MIRI would in fact be effective in preventing disastrous-AI scenarios. I think that’s an open question, and in the very article we’re commenting on we can see that Holden Karnofsky of GiveWell gave the matter some thought and decided that MIRI’s work is probably counterproductive overall in that respect. (That was some time ago; MIRI’s and/or HK’s opinions may have changed relevantly since then.) As I already mentioned, I do not myself donate to MIRI; I was trying to answer the question “why would anyone who isn’t crazy or stupid donate to MIRI?”, and I think it’s reasonably clear that someone neither crazy nor stupid could decide that MIRI’s work does help to reduce the risk of AI-induced disaster.
(“Evil AIs running around and killing everybody”, though, is a curious choice of phrasing. It seems to fit much better with any number of rather silly science fiction movies than with anything MIRI and its supporters are actually arguing might happen. Which suggests that either you haven’t grasped what it is they are worried about, or you have grasped it but prefer inaccurate mockery to engagement—which is, of course, your inalienable right, but may not encourage people here to take your comments as seriously as you might prefer.)
I wasn’t intending to make a Pascal’s wager. Again, I am not myself a MIRI donor, but my understanding is that those who are generally think the probability of AI-induced disaster is not very small. So the point isn’t that there’s some tiny probability of a huge disaster, so we multiply (say) a 10^-6 chance of disaster by billions of lives lost and decide that we have to act urgently. It’s that, for the MIRI donor, there’s maybe a 10% (or even a 99%) chance of AI-induced disaster if we aren’t super-careful, and they hope MIRI can substantially reduce that.
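To make that distinction concrete, here is a toy expected-value sketch in Python. Every figure in it (the lives at stake, the 1% risk reduction, the candidate probabilities) is an invented placeholder, not MIRI’s or anyone else’s actual estimate:

```python
# Toy expected-value calculation, in the spirit of the paragraph above.
# All numbers are illustrative assumptions, not anyone's real estimates.

LIVES_AT_STAKE = 8e9    # roughly the current world population
RISK_REDUCTION = 0.01   # suppose an intervention shaves 1% off P(disaster)

for p_disaster in (1e-6, 0.10, 0.99):
    expected_lives_saved = p_disaster * RISK_REDUCTION * LIVES_AT_STAKE
    print(f"P(disaster) = {p_disaster}: expected lives saved = {expected_lives_saved:,.0f}")

# At 10^-6 the case rests entirely on the astronomical stakes
# (Pascal's-mugging territory); at 0.10 or 0.99 it is an ordinary,
# if uncertain, cost-benefit argument.
```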
“other far more pressing and realistic problems in the world today”
The underlying argument here is, if I’m understanding right, something like this: “We know that there are people starving in Africa right now. We fear that there might some time in the future be danger from superintelligent artificial intelligences whose goals don’t match ours. We should always prioritize known, present problems over future, uncertain ones. So it’s silly to expend any effort worrying about AI.” I disagree with the third premise, that we should always prioritize known, present problems. Consider global warming: it probably isn’t doing us much harm yet, and although the skeptics/deniers are probably wrong, that isn’t altogether impossible; so trying to deal with global warming also falls into the category of future, uncertain threats. And yet this was your first example of something that should obviously be given priority over AI safety.
I guess (but please correct me if I guess wrong) that your response would be that the danger of AI is much, much lower-probability than the danger of global warming. (Because the probability of producing AI at all is small, or because the probability of getting a substantially superhuman AI is small, or because a substantially superhuman AI would be very unlikely to do any harm, or whatever.) You might be right. How sure are you that you’re right, and why?
Extremely tiny probabilities with enormous utilities attached do suffer from Pascal’s-Mugging-type scenarios. That said, AI-risk probabilities are, by my estimate, much larger than the sorts of probabilities required for Pascal-type problems to come into play. Unless Perrr333 intends to suggest that probabilities involving UFAI really are that small, I think it’s unlikely he/she is actually making any sort of logical argument. It’s far more likely, I think, that he/she is making an argument from incredulity, disguised by seemingly logical arguments but still at its core motivated by incredulity.
The problem with that, of course, is that arguments from incredulity rely almost exclusively on intuition, and the usefulness of intuition decreases spectacularly as scenarios become more esoteric and further removed from the realm of everyday experience.
How can anyone seriously consider the hypothetical threat of AIs running around a worthier cause than stopping global warming, or investing in renewable resources, or preventing/relieving humanitarian crises?
Another datapoint to compare and contrast with Salemicus’s (our political positions are very different):
Like Salemicus, I am not very optimistic that you’re actually asking a serious question with the intention of listening to the answers; if you are, you might want to reconsider how your writing comes across.
I think it’s perfectly possible, and reasonable, to be concerned about more than one issue at a time.
There is an argument that charitable giving, unless you’re giving far more than most of us are in a position to give, should all be directed to the single best cause you can find. I am not a donor to MIRI because I don’t think it’s the single best cause I can find. If you’re asking why people give money to MIRI then maybe someone else will answer that.
I think all three of the things you list are important. (In particular, unlike Salemicus, I think there are things we can do that will reduce global warming and be of net benefit in other respects; I agree with Salemicus that we are unlikely to completely run out of, say, oil, but think it very possible that the price might become very high, which could hurt us a lot; and I strongly disagree with his view that attempts to deal with humanitarian crises are typically harmful.)
AI safety is less likely to be a problem than any of them, but (with low probability) could be a worse problem than any of them.
In particular, there are improbable-feeling scenarios in which it’s a huuuuuge catastrophe. These tend to feel “silly” simply because they involve things happening that are far outside the range of what we’re familiar with, but consideration of how (say) Shakespeare might have reacted to some features of present-day technology suggests to me that this isn’t a very reliable guide.
In any case, these scenarios are interesting to think about even if they end up not being a problem. (They might end up not being a problem because they have been thought about. This would not be a bad outcome.)
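To make the “less likely, but potentially worse” comparison concrete, here is a toy ranking by expected harm. Every probability and harm figure below is invented purely for illustration; nothing in it is my estimate or anyone else’s:

```python
# Toy ranking of causes by expected harm (probability x severity).
# Every probability and harm figure is an invented placeholder.

causes = {
    # name: (P(bad outcome), harm in lives if it happens)
    "humanitarian crises":       (0.90, 1e6),
    "resource exhaustion":       (0.20, 1e7),
    "severe global warming":     (0.50, 1e8),
    "unfriendly-AI catastrophe": (0.05, 8e9),
}

ranked = sorted(causes.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, harm) in ranked:
    print(f"{name:<28} expected harm = {p * harm:,.0f} lives")

# The 5%-probability catastrophe tops this list despite being the least
# likely, because its harm term dominates; that is the shape of the
# "less likely but could be worse" point above.
```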
In the slim chance that your question is non-rhetorical:
Many people do not consider global warming to be a problem. Others think that there is nothing useful to be done about it. Personally I do not consider global warming to be a serious threat; people will adapt fairly easily to temperature changes within the likely ranges. Further, any realistic ‘cure’ for global warming would almost certainly be worse than the disease. Therefore I do not view climate change activism to be a worthy cause at present, although that could change.
History and economics both suggest that so-called non-renewable resources are in fact very robust. Mankind has never run out of any non-renewable resource, whereas we have run out of many renewable ones. The fact that a resource is hypothetically ‘renewable’ does not necessarily have much impact on the limits to its use. For instance, we need to worry far less about running out of coal than about running out of condor eggs. I view most investment in renewable resources as pure boondoggling, and pretty much the opposite of a worthy cause.
Preventing and relieving humanitarian crises can be a worthy cause in principle. But in practice activism along those lines seems heavily counterproductive. I often wonder how many fewer crises there would be if
So basically, I don’t think MIRI is likely to do much good in the world. But I’d much rather donate to them than to Greenpeace, Solyndra, or Oxfam, because at least they’re not actively doing harm.