I disagree; this might have real-world implications. For example, the recent OpenPhil report on Semi-informative Priors for AI timelines updates on the passage of time, but if we model creating AGI as playing Russian roulette*, perhaps one shouldn't update on the passage of time (see the toy sketch below).

* I.e., AGI in the 2000s might have led to an existential catastrophe due to underdeveloped safety theory.
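To make the distinction concrete, here is a toy numerical sketch (not the report's actual model; the flat prior, the grid over p, and N = 65 are made-up illustrative choices). It compares updating a prior over p = P(AGI is developed in a given year) after N years without AGI, under an ordinary observation model versus a Russian-roulette one where observers only exist in the no-AGI worlds:

```python
# Toy illustration only: compare two observation models for "N years, no AGI".
import numpy as np

p_grid = np.linspace(0.001, 0.5, 500)   # candidate per-year probabilities of AGI
prior = np.ones_like(p_grid)            # flat prior, purely for illustration
prior /= prior.sum()
N = 65                                   # illustrative number of AGI-free years

# Case 1: ordinary failure. "No AGI for N years" is evidence that p is small.
lik_observable = (1 - p_grid) ** N
post_observable = prior * lik_observable
post_observable /= post_observable.sum()

# Case 2: Russian roulette. If early AGI meant existential catastrophe,
# observers only exist in no-AGI worlds, so P(observe "no AGI" | p) = 1
# for every p, and survival carries no evidence about p.
lik_roulette = np.ones_like(p_grid)
post_roulette = prior * lik_roulette
post_roulette /= post_roulette.sum()

print("Posterior mean of p, updating on the passage of time:",
      (p_grid * post_observable).sum())
print("Posterior mean of p, Russian-roulette observation model:",
      (p_grid * post_roulette).sum())
```

In the first case the posterior mean drops well below the prior mean; in the second it equals the prior mean, i.e. no update from the passage of time.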
That is not an analogous situation. In the AI case, the risk obviously increases over time.