If there’s an AGI within, say, 10 years, and it mostly keeps the world recognizable so there are still researchers to have opinions, does that resolve as “never” or according to whether the AGI wants them to be convinced? If the latter, I expect that they will in hindsight be convinced that we should have paid more attention to safety. If the former, I submit that his prior doesn’t change. If the latter, I submit that the entire prior is moved 10 years to the left (with the first 10 years cut off and the remainder renormalized along the y-axis).
The third question is:

> Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?
Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:
> even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job
Unrelatedly, you should probably label your comment “aside.” [edit: I don’t endorse this remark anymore.]
It was meant as a submission, except that I couldn’t be bothered to actually implement my distribution on that website :)

Even/especially after superintelligent AI, researchers might come to the conclusion that we weren’t prepared and *shouldn’t* build another, regardless of whether the existing sovereign would allow it.
Would the latter distribution you described look something like this?

Not quite. Just take the prior and draw a vertical line at 2030. Note that you’re incentivizing people to submit their guesses as late as possible, both to have time to read other people’s comments and to place a guess just to one side of someone else’s.
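The “cut off at a year and renormalize” operation both comments describe can be sketched as follows. This is a minimal illustration with a made-up uniform toy prior, not anyone’s actual submitted distribution:

```python
# Conditioning a discrete yearly prior: drop all probability mass
# before a cutoff year, then renormalize so the rest sums to 1.
# The prior values here are invented purely for illustration.

years = list(range(2021, 2041))                  # 20 candidate years
prior = [1.0 / len(years)] * len(years)          # uniform toy prior

def truncate_and_renormalize(years, probs, cutoff):
    """Zero out all mass before `cutoff`, then renormalize the remainder."""
    kept = [(y, p) for y, p in zip(years, probs) if y >= cutoff]
    total = sum(p for _, p in kept)
    return [(y, p / total) for y, p in kept]

# "Draw the vertical line at 2030": everything left of it is removed,
# and the surviving mass is scaled back up to a proper distribution.
posterior = truncate_and_renormalize(years, prior, 2030)
```

The same helper also captures the “first 10 years cut off, then renormalize along the y-axis” move from the earlier comment: shifting the prior 10 years left is just truncating at the current year plus 10 and relabeling.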
We’re OK with people posting multiple snapshots if you want to update based on later comments! You can edit your comment with a new snapshot link, or add a new comment with the latest snapshot (we’ll consider the latest one, or whichever one you identify as your final submission).