But it’s a little worrying when a community widely shares a strong belief in doom while implying that the arguments for it are esoteric, resting on many subtle claims, each of which might have counterarguments, but which taken together will eventually convince you. 1a3orn has a good essay about this: https://1a3orn.com/sub/essays-ai-doom-invincible.html.
I wrote a post on that exact selection effect. There’s an even trickier problem when results are heavy-tailed: without very expensive experiments or access to ground truth, a small, insular, smart group that reaches the correct conclusion is basically indistinguishable from one that reaches the wrong conclusion but believes it’s true, due to selection effects plus unconscious selection for weaker arguments.
Here’s an EA Forum version of the post.