Some “superpowers” are an existential risk, such as designing a virus that quickly kills all humans.

Other “superpowers” are horrifying on a System 1 level but not actually an existential risk, like building a dozen giant robots that stomp on buildings and destroy a few cities, but are ultimately destroyed by an army.

If we get lucky and the AI develops and uses the latter kind of “superpowers” first, we might get scared before we die.