Instead, my opposition to AI successionism comes from a preference toward my own kind. This is hardwired into me by biology. I prefer my family members to randomly-sampled people with similar traits.
I have a biologically hardwired preference for defeating and hurting those who oppose me vigorously. I work very hard to sideline that biologically hardwired preference.
To be human is to be more than human.
You and all of us struggle against some of our hardwired impulses while embracing others.
Separately, the wiser among successionist advocates may be imagining a successor for whom they’d feel the same sort of love you feel for your grandchildren. Humans are not limited to loving their biological kin, although we are certainly biased toward loving them more (do ask someone from a really bad family how strong their familial instincts are).
Before I’d even consider accepting some sort of successor, I’d want to meet them and know that they spark joy in my heart the way seeing a young human playing and learning does. I find it entirely possible that we could create beings who would evoke more love from humans than humans do, because they are genuinely more worthy of it.
I have a biologically hardwired preference for defeating and hurting those who oppose me vigorously. I work very hard to sideline that biologically hardwired preference.
This seems like a very bad analogy, and misleading in this context. We can usefully distinguish between evolutionarily beneficial instrumental strategies that are no longer adaptive and that actively sabotage our other preferences in the modern environment, and preferences we can preserve without sacrificing other goals.
At best, this is an overstatement of how bad that analogy is. At worst, it’s pointing to an important aspect of the logic here: having an innate drive doesn’t mean consciously endorsing it. There’s a second step.
I think the analogy is actually directly on point.
My preference for humans over other types of sapient/sentient beings does conflict with my other goals: furthering happiness/joy in all its embodiments, and furthering diversity of thought and experience. If I insist on a world full of humans and leave no room for sentient AIs, those other goals may well be sacrificed to my preference for humans.
I feel I should clarify once again that I am not a successorist; right now I’d prefer a world with lots of humans and posthumans, as well as many types of sentient AIs (and non-sentient AIs to do the work that nobody sentient wants to do).
But I’m highly uncertain, and merely trying to contribute to the logic of this question.
I haven’t thought about this a ton, because I consider it far more pressing to figure out alignment so we can have some measure of control over the future. Non-sentient AI taking over the lightcone is something almost no one wants on reflection, including, I think, most “successorists”, who are motivated more by trolling than by sincere, considered beliefs. Where I’ve followed their logic (e.g., Beff Jezos), they have actually indicated that they expect sentient AI and would be unhappy with a future without sentience in some form.