The probability of unfriendly AI is too low, and the evidence is too “brittle”.
Earlier you said: "People already suck at telling whether Vitamin D is good for you, yet some people seem to believe that they can have non-negligible confidence about the power and behavior of artificial general intelligence." Now you're making high-confidence claims about AGI. Also, I remind you that this discussion started from my criticism of the proposed AGI safety protocols. If there is no UFAI risk, then the safety protocols are pointless.
In other words, one may assign 50% probability to “a coin will come up heads” and “there is intelligent life on other planets,” but one’s knowledge about the two scenarios is different in important ways.
Not in ways that have to do with expected utility calculation.
Suppose there are 4 risks. One mundane risk has a probability of 1⁄10, and you assign 20 utils to its prevention. Another, less likely risk has a probability of 1⁄100, but you assign 1000 utils to its prevention. Yet another risk is very unlikely, having a probability of 1/1000, but you assign 1 million utils to its prevention. The fourth risk is extremely unlikely, having a probability of 10^-10000, but you assign 10^10006 utils to its prevention. All else equal, which one would you choose to prevent, and why?
Risk 4, since it corresponds to the highest expected utility.
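The comparison above can be checked directly. A minimal sketch, using exact rational arithmetic because a probability of 10^-10000 underflows ordinary floating point (the risk names are placeholders, not from the original exchange):

```python
from fractions import Fraction

# (probability, utils assigned to prevention) for each of the four risks
risks = {
    "risk 1": (Fraction(1, 10), 20),
    "risk 2": (Fraction(1, 100), 1000),
    "risk 3": (Fraction(1, 1000), 10**6),
    "risk 4": (Fraction(1, 10**10000), 10**10006),
}

# Expected utility of preventing each risk: probability * utils
expected_utilities = {name: p * u for name, (p, u) in risks.items()}

# Risk 4 wins: 10^-10000 * 10^10006 = 10^6, versus 2, 10, and 1000
best = max(expected_utilities, key=expected_utilities.get)
```

Despite its astronomically small probability, risk 4's payoff grows faster than its probability shrinks, so it dominates the expected-utility comparison.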
And if you would choose risk 4, do you also give money to a Pascalian mugger?
My utility function is bounded (I think) so you can only Pascal-mug me that much.
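The bounded-utility response can be sketched in a few lines. Assuming a hypothetical cap on how many utils any outcome can be worth, a mugger's ever-larger claimed payoff stops mattering once it hits the bound, so the tiny probability dominates (the bound value and names here are illustrative, not from the original exchange):

```python
from fractions import Fraction

BOUND = 10**6  # hypothetical upper bound on the utility function

def bounded_eu(probability, claimed_utils):
    """Expected utility when payoffs are clipped at BOUND."""
    return probability * min(claimed_utils, BOUND)

# The mugger can inflate the claimed payoff without limit, but the
# clipped expected utility still vanishes with the probability:
mugger_eu = bounded_eu(Fraction(1, 10**10000), 10**10006)

# A mundane alternative from the four-risk example above:
mundane_eu = bounded_eu(Fraction(1, 10), 20)
```

With the cap in place the mugger's offer is worth 10^-9994 utils versus 2 for the mundane option, so the mugging fails.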
And if you are saying that AI risk is the most probable underfunded risk...
I have no idea whether it is underfunded. I can try to think about it, but it has little to do with the present discussion.