If you have a framing of the AI Doom argument that can cause super-forecasters (or AI risk skeptics, or literally any group with an average pDoom < 20%) to change their consensus, I would be exceptionally interested in seeing that demonstrated.
Such an argument would be neither bad nor weak; that is precisely the type of argument I have been hoping to find by writing this post.
> Please notice that your position is extremely non-intuitive to basically everyone.
This experiment has been done before.
Please notice that Manifold both expects AGI soon and puts pDoom low.