I have a biologically hardwired preference for defeating and hurting those who oppose me vigorously. I work very hard to sideline that biologically hardwired preference.
This seems like a very bad analogy, which is misleading in this context. We can usefully distinguish between evolutionarily beneficial instrumental strategies which are no longer adaptive and actively sabotage our other preferences in the modern environment, and preferences that we can preserve without sacrificing other goals.
This seems at best like an overstatement of how bad that analogy is. At worst, it misses that the analogy points to an important aspect of the logic here: having an innate drive doesn't mean consciously endorsing it. There's a second step between having the drive and endorsing it.
I think the analogy is actually directly on point.
My preference for humans over other types of sapient/sentient beings does conflict with my other goals: furthering happiness/joy in all its embodiments, and furthering diversity of thought and experience. If I want a world full of humans and make no room for sentient AIs, those other goals may well be sacrificed for my preference for humans.
I feel I should clarify once again that I am not a successorist; right now I’d prefer a world with both lots of humans and posthumans, and lots of types of sentient AIs (and non-sentient AIs to do work that nobody sentient wants to do).
But I’m highly uncertain, and merely trying to contribute to the logic of this question.
I haven’t thought about this a ton because I consider it far more pressing to figure out alignment so we can have some measure of control over the future. Non-sentient AI taking over the lightcone is something almost no one wants on reflection, including, I think, most “successorists,” who are motivated more by trolling than by sincere, considered beliefs. Where I’ve followed their logic (e.g., Beff Jezos), they have actually indicated that they expect sentient AI and would be unhappy with a future without sentience in some form.