For any given piece of evidence (expert statements, benchmarks, surveys, etc.), most good-faith people agree on whether it makes doom more or less likely.
But we don't agree on how much it should move our estimates, and there's no good way of discussing that.
How to do better?
Thanks for doing these; you have a rare gift.
I look forward to your posts on AI every week!