Yes, even an AI that has not undergone any recursive self-improvement might be a threat to human survival. I remember Eliezer saying this (a few years ago) but please don’t ask me to find where he says it.
My point is that recursive self-improvement is often cited as the thing that gets an AI up to the level where it's powerful enough to kill everyone. I disagree with that characterization, and it's an important crux: believing it makes someone's timelines shorter.