I think few of us in the alignment community are actually in a position to change our minds about whether alignment is worth working on. With a p(doom) of ~35%, I think it’s unlikely that arguments alone would push me below the ~5% threshold where working on AI misuse, biosecurity, etc. becomes competitive with alignment. And there are people with a p(doom) of >85%.
That said, I have since changed my mind and now think some of the core arguments for x-risk don’t go through. It’s plausible that I would go below 5% given continued success in alignment-related ML fields, and I could substantially change my mind from a single conversation.