Another datapoint to compare and contrast with Salemicus’s (our political positions are very different):
Like Salemicus, I am not very optimistic that you’re actually asking a serious question with the intention of listening to the answers; if you are, you might want to reconsider how your writing comes across.
I think it’s perfectly possible, and reasonable, to be concerned about more than one issue at a time.
There is an argument that charitable giving, unless you're giving far more than most of us are in a position to give, should all be directed to the single best cause you can find. I am not a donor to MIRI because I don't think it's the single best cause I can find. If you're asking why people give money to MIRI, then maybe someone else will answer that.
I think all three things you list are important. (In particular, unlike Salemicus I think there are things we can do that will reduce global warming and be of net benefit in other respects; I agree with Salemicus that we are unlikely to completely run out of (say) oil, but think it very possible that the price might become very high and that this could hurt us a lot; and I strongly disagree with his view that attempts to deal with humanitarian crises are typically harmful.)
AI safety is less likely to be a problem than any of them, but (with low probability) could be a worse problem than any of them.
In particular, there are improbable-feeling scenarios in which it’s a huuuuuge catastrophe. These tend to feel “silly” simply because they involve things happening that are far outside the range of what we’re familiar with, but consideration of how (say) Shakespeare might have reacted to some features of present-day technology suggests to me that this isn’t a very reliable guide.
In any case, these scenarios are interesting to think about even if they end up not being a problem. (They might end up not being a problem because they have been thought about. This would not be a bad outcome.)