How can anyone seriously consider the hypothetical threat of AIs running around a worthier cause than stopping global warming, or investing in renewable resources, or preventing/relieving humanitarian crises?
Another datapoint to compare and contrast with Salemicus’s (our political positions are very different):
Like Salemicus, I am not very optimistic that you’re actually asking a serious question with the intention of listening to the answers; if you are, you might want to reconsider how your writing comes across.
I think it’s perfectly possible, and reasonable, to be concerned about more than one issue at a time.
There is an argument that charitable giving, unless you’re giving far more than most of us are in a position to give, should all be directed to the single best cause you can find. I am not a donor to MIRI because I don’t think it’s the single best cause I can find. If you’re asking why people give money to MIRI then maybe someone else will answer that.
I think all three of the things you list are important. (In particular, unlike Salemicus I think there are things we can do that will reduce global warming and be of net benefit in other respects; I agree with Salemicus that we are unlikely to completely run out of (say) oil, but think it very possible that the price might become very high and that this could hurt us a lot; and I strongly disagree with him in that I don't think attempts to deal with humanitarian crises are typically harmful.)
AI safety is less likely to be a problem than any of them, but (with low probability) could be a worse problem than any of them.
In particular, there are improbable-feeling scenarios in which it’s a huuuuuge catastrophe. These tend to feel “silly” simply because they involve things happening that are far outside the range of what we’re familiar with, but consideration of how (say) Shakespeare might have reacted to some features of present-day technology suggests to me that this isn’t a very reliable guide.
In any case, these scenarios are interesting to think about even if they end up not being a problem. (They might end up not being a problem because they have been thought about. This would not be a bad outcome.)
On the slim chance that your question is non-rhetorical:
Many people do not consider global warming to be a problem. Others think that there is nothing useful to be done about it. Personally I do not consider global warming to be a serious threat; people will adapt fairly easily to temperature changes within the likely ranges. Further, any realistic ‘cure’ for global warming would almost certainly be worse than the disease. Therefore I do not view climate change activism to be a worthy cause at present, although that could change.
History and economics both suggest that so-called non-renewable resources are in fact very robust. Mankind has never run out of any non-renewable resource, whereas we have run out of many renewable ones. The fact that a resource is hypothetically 'renewable' does not necessarily have much bearing on the limits to its use. For instance, we need to worry far less about running out of coal than about running out of condor eggs. I view most investment in renewable resources as pure boondoggling, and pretty much the opposite of a worthy cause.
Preventing and relieving humanitarian crises can be a worthy cause in principle. But in practice activism along those lines seems heavily counterproductive. I often wonder how many fewer crises there would be if
So basically, I don’t think MIRI is likely to do much good in the world. But I’d much rather donate to them than to Greenpeace, Solyndra or Oxfam, because at least they’re not actively doing harm.