Ok, it sounds like we agree on pretty much everything except what it means for something to “be an existential risk”. I think a 0.01% chance still counts as a risk worth worrying about (or it would, if AI x-risk weren’t multiple orders of magnitude higher).