[Question] How concerned are you about a fast takeoff due to a leap in hardware usage?
I am imagining a scenario like this:
1. A company spends $10 billion training an AI.
2. The AI has fully human-level capabilities.
3. The company thinks, wow, this is amazing, we can justify spending way more than $10 billion on something like this.
4. They don't bother with any algorithmic improvements; they just run the same training with a $1 trillion budget instead. (Maybe they get a big loan.)
5. The $1 trillion AI is superintelligent.
6. The $1 trillion AI kills everyone.
Thus there is no period of recursive self-improvement: you go from human-level to dead in a single step.
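For a rough sense of what that ~100x budget jump in steps 3–4 might buy, here's a minimal sketch using the Chinchilla scaling-law fit from Hoffmann et al. (2022). The fitted constants are the published ones; the baseline FLOP count for the $10B run and the assumption that dollars convert linearly into compute are mine, purely for illustration.

```python
# A minimal sketch, assuming Chinchilla-style scaling (Hoffmann et al. 2022):
# L(N, D) = E + A / N^alpha + B / D^beta, with training compute C ~= 6 * N * D.
E, A, B = 1.69, 406.4, 410.7    # published fitted constants
alpha, beta = 0.34, 0.28

def loss_at(compute_flops):
    # Rough approximation of the compute-optimal split: N ~= D ~= sqrt(C / 6).
    # (The true optimum depends slightly on alpha and beta; close enough here.)
    n = d = (compute_flops / 6) ** 0.5
    return E + A / n**alpha + B / d**beta

C_BASE = 1e26   # hypothetical FLOP count for the $10B run (my assumption)
for label, c in [("$10B run", C_BASE), ("$1T run", 100 * C_BASE)]:
    print(f"{label}: pretraining loss ~= {loss_at(c):.2f}")
```

Under these (very contestable) assumptions, 100x compute buys only a modest drop in pretraining loss, so the scenario seems to hinge on capabilities near the human level scaling much more sharply with loss than they appear to at lower levels.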
This scenario depends on some assumptions that seem kinda unlikely to me, but not crazy unlikely. I want to hear other people’s thoughts.