[Question] How concerned are you about a fast takeoff due to a leap in hardware usage?

I am imagining a scenario like:

  1. A company spends $10 billion training an AI.

  2. The AI has fully human-level capabilities.

  3. The company thinks: wow, this is amazing, we can justify spending way more than $10 billion on something like this.

  4. They don’t bother with any algorithmic improvements or anything; they just run the same training but with $1 trillion instead, as sketched below. (Maybe they get a big loan.)

  5. The $1 trillion AI is superintelligent.

  6. The $1 trillion AI kills everyone.

Thus there is no period of recursive self-improvement; you go from human-level to dead in a single step.
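
For a sense of what that 100x budget jump might buy, here is a back-of-envelope sketch in Python. It assumes a Chinchilla-style loss curve (using the constants fitted by Hoffmann et al. 2022) and a made-up cost-per-FLOP figure; both are illustrative assumptions, not claims about any real training run.

```python
# Back-of-envelope: what does a 100x jump in training budget buy under a
# Chinchilla-style scaling law? Every constant below is an illustrative
# assumption, not a measurement of any real training run.

def loss_from_budget(dollars, dollars_per_flop=1e-18):
    """Toy estimate of pretraining loss from a dollar budget.

    Assumptions:
      - compute C (FLOPs) = budget / cost-per-FLOP (cost figure is made up),
      - compute-optimal split N = D = sqrt(C / 6), from C ~ 6 * N * D,
      - loss L = E + A / N**alpha + B / D**beta, with the constants fitted
        by Hoffmann et al. (2022), used here purely for illustration.
    """
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    C = dollars / dollars_per_flop      # total training FLOPs
    N = D = (C / 6.0) ** 0.5            # parameters and tokens, optimal split
    return E + A / N**alpha + B / D**beta

for budget in (10e9, 1e12):             # the $10 billion and $1 trillion runs
    print(f"${budget:>18,.0f}: predicted loss ~ {loss_from_budget(budget):.3f}")
```

Under these (debatable) assumptions, the predicted loss only moves from about 1.76 to about 1.73, so step 5 is really the assumption that a modest change on whatever axis you scale corresponds to the jump from human-level to superintelligent.
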

This scenario rests on some assumptions that seem kind of unlikely to me, but not wildly so. I want to hear other people’s thoughts.