I don’t think a single fixed goal is a necessary assumption before someone can conclude that WE ARE ALL GONNA DIE!!! Humans don’t have a single fixed terminal goal either. And yet if you scale up a human intelligence 1000x, even with all of a human’s faults and inconsistencies preserved (and magnified), you still end up in a world where human survival is at the mercy of a new species whose behavior is incomprehensible and unfathomable to us. What is likely to happen is that the world loses its predictability to humans, and agency, by definition, can only exist in a world that is predictable from the inside. Once agency becomes impossible, agents automatically cease to exist.
Yes, but that doesn’t engage with the main argument, which seems novel. nostalgebraist doesn’t claim that the AGI couldn’t kill us all if it wanted to.