But is it appropriate to be ~98% sure that the ASI level will be achieved in the coming years? If not, then it seems reasonable to allow for more uncertainty. To demonstrate that the forecasts are well calibrated, it would be worth making more verifiable statements. I have often seen claims that Yudkowsky's probabilities are perfectly calibrated, but judging by his other public forecasts and his page on Manifold, that does not seem to be the case.
I can mostly only speak to my own probabilities, and it depends on how many years count as "coming." I'm less than 98% on ASI in the next five years, say. The ~98% is if anyone builds it (using anything remotely like current methods).