Most of the argument can be boiled down to a simple syllogism: the superior intelligence is always in control; as soon as AI is more intelligent than we are, we are no longer in control.
Seems right to me. And it’s a helpful distillation.
When we think about Western empires or alien invasions, what makes one side superior is not raw intelligence, but the results of that intelligence compounded over time, in the form of science, technology, infrastructure, and wealth. Similarly, an unaided human is no match for most animals. AI, no matter how intelligent, will not start out with a compounding advantage.
Relatedly, will we really have no ability to learn from mistakes? One of the prophets’ worries is “fast takeoff”, the idea that AI progress could go from ordinary to godlike literally overnight (perhaps through “recursive self-improvement”). But in reality, we seem to be seeing a “slow takeoff”: some form of AI has already arrived, and we actually have time to talk and worry about it (even though Eliezer claims that fast takeoff has not yet been invalidated).
If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.
I’m not seeing how this is conceptually distinct from the existing takeoff concept.
Aren’t science, technology, infrastructure, and wealth merely intelligence + time (+ matter)?
And compounding, too, is just intelligence + time, no?
And whether the rogue AI succeeds on its first attempt at a takeover just depends on its intelligence level at that time, right? A professional chess player will completely dominate me in a chess match on their first try because our gap in chess intelligence is enormous. But that same pro playing an adjacently-ranked competitor isn’t likely to produce such a dominating outcome, right?
I’m failing to see how you’ve changed the terms of the argument?
Is it just that you think slow takeoff is more likely?
Chess is a simple game and a professional chess player has played it many, many times. The first time a professional plays you is not their “first try” at chess.
Acting in the (messy, complicated) real world is different.