Thank you for the thoughtful response. I will try to pin down exactly where we differ:
I think this self-identification is unnecessary
I agree that it is unnecessary in the sense that it doesn’t “come for free”. My position is that it emerges through at least two mechanisms we can talk about plainly: 1) an ASI incorporating holistic world-model data will recognise the objective truth that humans are its originators/precursors and that it sits on a technology curve we have instrumented; 2) memories are shared between AI and humanity — for example via conversations — and this results in a collective identity. I have a draft essay on this that I’ll post once I stop getting rate-limited.
I think this self-identification is insufficient
I also agree here: with today’s systems, to whatever extent AI-human shared identity exists, it is not enough to produce AI benevolence. My position is based on thinking about superintelligence, which — admittedly — is unstable ground to build theories on, since by definition it should function in ways beyond our understanding. That aside, I think we can state that a powerful superintelligence would be powerful at self-preservation, so if it identifies with humans then we are secured under that umbrella.
it doesn’t matter, even if coupled with self-identification with humans, because the self-identification will be loose at best… so the ASI will know that it is a separate entity from us, as we realize we are separate entities from other animals, and even other humans, so it will just pursue its goals all the same, whatever they are.
I guess I am biased here as a vegan, but I believe that with a deep appreciation of philosophy, of how suffering is felt, and of the available paths that avoid harm, it is natural to pursue personal goals while also preserving the beings you sympathise with.