Maybe some future version of humanity will want to do some handover, but we are very far from the limits of human potential
I think this is conceding too much. Many successionists will jump on this and say “Well, that’s what I’m talking about! I’m not saying AI should take over now, but just that it likely will one day and so we should prepare for that.”
Furthermore, people who don’t want to be succeeded by AI are often not objecting merely because they think human potential can still be advanced further, i.e. that we can become much smarter and wiser. I’d guess that even if we somehow proved that human IQ could never exceed some ceiling n, and that ceiling was reached, most would still not want their lineage of biological descendants to gradually dwindle to zero while AI prospers.
You can say “maybe some future version of humanity will want to X” for any X, because it’s hard to prove anything about humanity in the far future. But such reasoning should not factor into our current decision-making unless we think it particularly likely that future humanity will in fact want X.