Note that when GiveWell, a charity evaluation service, interviewed the SIAI, they hinted at the possibility that one could consider the SIAI to be a sort of Pascal’s Mugging:

GiveWell: OK. Well that’s where I stand—I accept a lot of the controversial premises of your mission, but I’m a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don’t need to be sold—that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it; I wouldn’t endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.
Could this be part of the reason why Eliezer Yudkowsky wrote that the SIAI is only a worthwhile charity if the odds of being wiped out by AI are larger than 1%?
And I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. And if you can carry a qualitative argument that the probability is under, say, 1%, then that means AI is probably the wrong use of marginal resources – not because global warming is more important, of course, but because other ignored existential risks like nanotech would be more important. I am not trying to play burden-of-proof tennis. If the chances are under 1%, that’s low enough, we’ll drop the AI business from consideration until everything more realistic has been handled.
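Eliezer’s threshold argument can be made concrete with a toy expected-value comparison. All numbers below (risk probabilities, tractability of mitigation) are purely illustrative assumptions of mine, not figures from the discussion:

```python
# Toy expected-value comparison between existential risks.
# Every number here is an illustrative assumption, not a real estimate.

STAKES = 1.0  # normalize the value of averting any extinction to 1

def marginal_value(p_risk, p_mitigation_works):
    """Expected fraction of the stakes saved by funding mitigation."""
    return p_risk * p_mitigation_works * STAKES

ai_high = marginal_value(0.10, 0.05)  # AI risk at 10%, as Eliezer estimates
ai_low  = marginal_value(0.01, 0.05)  # AI risk argued down to 1%
nano    = marginal_value(0.03, 0.05)  # a neglected risk such as nanotech

print(ai_high > nano)  # True: at 10%, AI risk dominates the alternative
print(ai_low > nano)   # False: at 1%, the other neglected risk dominates
```

This is why the 1% figure functions as a crossover point rather than an absolute cutoff: below it, some other neglected risk plausibly offers more expected value per marginal dollar.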
When people buy insurance, they often plan for events that are less probable than 1%. The intuitive difficulty here is not that you act on an event with probability of 1%, but that you act on an event where the probability (be it 1% or 10% or 0.1%) is estimated intuitively, so that you have no frequency statistics to rely on, and there remains great uncertainty about the value of the probability.
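The insurance point can be made explicit with standard expected-utility arithmetic: a risk-averse agent (concave utility) rationally pays a premium above the expected loss to avoid a sub-1% catastrophe. The wealth, loss, and premium figures are illustrative assumptions:

```python
import math

# Why insuring against a 0.5% event can be rational: with concave
# (risk-averse) utility, a certain small premium beats a small chance
# of a ruinous loss, even though the premium exceeds the expected loss.
# All numbers are illustrative.

wealth, loss, p = 100_000.0, 90_000.0, 0.005
premium = 600.0  # note: larger than the expected loss of p * loss = 450

u = math.log  # a standard concave utility function

eu_insured   = u(wealth - premium)
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

print(eu_insured > eu_uninsured)  # True: insurance is preferred
```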
People fear acting on uncertainty that is about to be resolved, for if it’s resolved not in their favor, they will be faced with wide agreement that in retrospect their action was wrong. Furthermore, if the action is aimed to mitigate an improbable risk, they even expect that the uncertainty will resolve not in their favor. But this consideration doesn’t make the estimated probability any lower, and estimation is the best we have.
The analogy with insurance isn’t exact. One could argue (though I think one would be wrong) that diminishing returns related to bounded utility start setting in on scales larger than the kinds of events people typically insure against, but smaller than whatever fraction of astronomical waste justifies investing in combating 1% existential risk probabilities.
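The worry about bounded utility at astronomical scales is exactly what defuses Pascal’s-Mugging-style calculations: once utility saturates, a tiny probability of an astronomical payoff no longer dominates. A minimal sketch, with payoffs and probabilities chosen purely for illustration:

```python
import math

# How bounding utility defuses a Pascal's-Mugging-style calculation.
# Payoffs and probabilities are illustrative assumptions.

def bounded_u(x, bound=100.0):
    """A bounded, increasing utility function that saturates at `bound`."""
    return bound * (1 - math.exp(-x / bound))

p_tiny, astronomical = 1e-9, 1e15  # the mugger's offer
p_modest, ordinary   = 0.5, 50.0   # a mundane alternative

# Unbounded (linear) utility: the mugger's offer wins.
print(p_tiny * astronomical > p_modest * ordinary)  # True

# Bounded utility: the astronomical payoff saturates near the bound,
# so the tiny probability makes the offer nearly worthless.
print(p_tiny * bounded_u(astronomical) > p_modest * bounded_u(ordinary))  # False
```

The objection in the paragraph above is about where saturation sets in: if it starts only at scales far beyond ordinary insurance, bounded utility can still justify acting on 1% existential risks.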
This problem is the reason for most of the headache that LW is causing me, and I appreciate any attention it receives.
Me too. Would vote you up twice if I could.
Even mathematicians like John Baez are troubled by the unbounded maximization of expected utility. Could it be that we do not have bounded utility but rather only accept a limited degree of uncertainty?
I don’t think he mentioned “unbounded” in the post you’re citing. He talked about risk aversion, and that can be encoded by changing the utility function.
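The reply’s point is standard: risk aversion does not require abandoning expected-utility maximization, because a concave utility function already produces risk-averse behavior. A minimal sketch with illustrative numbers:

```python
import math

# Risk aversion encoded in the utility function: an expected-utility
# maximizer with concave utility declines an actuarially fair gamble.
# Numbers are illustrative.

wealth = 1000.0
u = math.sqrt  # concave, hence risk-averse

# Fair coin flip: win or lose 500 (expected change in money is zero).
eu_gamble = 0.5 * u(wealth + 500) + 0.5 * u(wealth - 500)
eu_refuse = u(wealth)

print(eu_refuse > eu_gamble)  # True: the fair gamble is declined
```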
The SIAI seems to be progressing slowly. It is difficult to see how their “trust us” approach will get anywhere. The plan of writing code in secret in a basement looks pretty crazy to me. On the more positive side, they do have some money and some attention.
...but overall: why consider the possibility of the SIAI taking over the world? That does not look like a very likely outcome.