Does Nate Soares believe that successionists aren’t real?
I went to a Q&A where, every time a successionist argument was brought up, he seemed to insist that the person raising it was confused.
E.g., about the two meanings of “good”: it’s “good” to believe 2+2=4 and it’s “good” to save a child from a burning building. It seemed to me that the person raising successionist points understood him just fine, but he kept insisting they didn’t. I brought up a very clear tweet from a successionist who definitely wasn’t confused, and Nate still insisted they were.
I don’t really believe successionists are real in the sense of “people who would reflectively endorse giving control over the future to AI systems right now”.
Even if you have weird preferences about AI ultimately being better than humanity, it seems convergently insane to make that succession happen now, in an uncontrolled way. If you want humanity to be replaced by AI systems, first put yourself in a position where you can steer that transition.
“people who would reflectively endorse giving control over the future to AI systems right now”.
“Right now” is doing a lot of work there. I don’t believe there are any such people (ok, modulo the lizardman constant) who reflectively want any likely singularity-level change right now. I do believe there is a very wide range of believable far-mode preferences about, and acceptances of, the trajectory of intelligent life beyond the immediate.
It’s actually “reflectively endorse” that’s doing even more work, I think. Consider the people who fall in love with AI, or who believe they have made an AI conscious by prompting it correctly. Of course there are people who think that AI should replace humanity right now.