Which is why I said a log-normal prior would be more reasonable.
Why a log-normal prior with mu = 0? Why not some other value for the location parameter? Log-normal makes pretty strong assumptions, which aren’t justified if, for all practical purposes, we have no information about the feedback constant.
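To make the objection concrete: a log-normal prior with location mu places its median at e^mu, so mu = 0 asserts the feedback constant is equally likely to fall below 1 as above it. A quick sketch (illustrative values only; the thread doesn't fix sigma, so sigma = 1 here is an assumption):

```python
import math

def lognormal_median(mu):
    # The median of LogNormal(mu, sigma) is e^mu, independent of sigma.
    return math.exp(mu)

def lognormal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ LogNormal(mu, sigma), via the Gaussian CDF of log(x).
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# With mu = 0, half the prior mass sits below 1 -- a substantive
# claim about the feedback constant, not an uninformative default.
print(lognormal_median(0.0))         # median = 1
print(lognormal_cdf(1.0, 0.0, 1.0))  # P(k <= 1) = 0.5

# Shifting the location parameter moves the median multiplicatively:
print(lognormal_median(2.0))         # e^2, roughly 7.39
```

Any particular choice of mu therefore encodes a definite opinion about the constant's scale, which is the sense in which the prior is "pretty strong."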
How much information do we have? We know that we haven’t managed to build an AI in 40 years, and that’s about it.
We may have little specific information about AIs, but we have tons of information about feedback laws, and some information about self-improving systems in general*. I agree that it can be tricky to convert this information to a probability, but that just seems to be an argument against using probabilities in general. Whatever makes it hard to arrive at a good posterior should also make it hard to arrive at a good prior.
(I’m being slightly vague here for the purpose of exposition. I can make these statements more precise if you prefer.)
(* See for instance the Yudkowsky-Hanson AI Foom Debate.)