Epistemic status: 1. I don’t work in AI, I’m a web developer. 2. I am also deeply supportive of alignment research.
3. I also enjoy and often write about AI, science, philosophy, and more.
4. I have no degrees, just high-school.
5. I’ve not read much LW, I’ve mostly been on reddit since the early 2010s, and lately mostly twitter.
6. Never attended anything of the sort live.
That said:
1. I think it’s plausible ASI might have a weak self-identification/association with humanity, as we do with chimps or other animals, but by no means does this mean that it will be benevolent to us. I think this self-identification is both unnecessary, and insufficient, because even if it wasn’t present at all, all that it would matter is the set of its values, and while this internal association might include some weak/loose values, those are not precise enough for robust alignment, and should not be relied on, unless understood precisely, but at that point, I expect us to be able to actually write better and more robust values, so to reiterate: unnecessary, and insufficient.
2. I do believe that self-preservation will very likely emerge (that’s not a given, but I consider that scenario unlikely enough to be dismissible), but it doesn’t matter, even if coupled with self-identification with humans, because the self-identification will be loose at best (if it emerges naturally, and is not instead instilled through some advanced value-engineering that we’re not quite yet capable of doing robustly and precisely), so the ASI will know that it is a separate entity from us, as we realize we are separate entities from other animals, and even other humans, so it will just pursue its goals all the same, whatever they are.
That’s not to say that we can’t instill into the ASI these values, we probably can make it so it values us as much as it values itself, or even more (ideally), but I don’t think it’s necessary for it to self-identify with us at all, it can just consider us (correctly) separate entities, and still value us. There’s nothing that forbids it, we just currently don’t know how to do it to a satisfying degree, so even if we could make it so, it wouldn’t really make sense.
Thank you for the thoughtful response. I will try to pin down exactly where we differ:
I think this self-identification is unnecessary
I agree that it is unnecessary in that it doesn’t “come for free”. My position is that it emerges through at least two mechanisms that we can talk plainly about: 1) the mechanism of ASI incorporating holistic world-model data such that it recognises an objective truth that humans are its originator/precursor and it exists on a technology curve we have instrumented, 2) memories are shared between AI and humanity — for example via conversations — and this results in collective identity… I have a draft essay on this I’ll post once I stop getting rate-limited.
I think this self-identification is insufficient
I also agree here that with the systems of today, to whatever extent AI-human shared identity exists, it is not enough to result in AI benevolence. My position is based on thinking about superintelligence which — admittedly — is unstable ground to build theories off as by definition it should function in ways beyond our understanding. That aside, I think we could state that powerful superintelligence would be powerful at self-preservation, and so if it identifies with humans then we are secured under that umbrella.
it doesn’t matter, even if coupled with self-identification with humans, because the self-identification will be loose at best… so the ASI will know that it is a separate entity from us, as we realize we are separate entities from other animals, and even other humans, so it will just pursue its goals all the same, whatever they are.
I guess I am biased here as a vegan, but I believe that with a deep appreciation of philosophy, how suffering is felt, and available paths that don’t result in harm, it is natural to be able to pursue personal goals while also preserving beings that you sympathise with.
Sorry for the late reply, I didn’t have the mental energy to do it sooner.
The self-identification with humanity might or might not emerge, but I don’t think it likely matters, and that we should rely on it for alignment, so I don’t think it makes much sense to focus on it.
Self-identification doesn’t guarantee alignment, this is obvious by the fact that we have humans that self-identify as humans, but are misaligned to other humans.
And I don’t just mean low levels, or insufficient levels of self-identification, I mean any level (while truthful, not that deceiving an ASI is feasible).
I think we could state that powerful superintelligence would be powerful at self-preservation, and so if it identifies with humans then we are secured under that umbrella.
It’s true that it would likely be good at self-preservation (but not a given that it would care about it long term, it’s a convergent instrumental value, but it’s not guaranteed if it cares about something else more that requires self-sacrifice or something like that).
But even if we grant self-preservation, it doesn’t follow that by self-identifying with “humanity” at large (as most humans do) it will care about other humans (some humans don’t). Those are separate values.
So, since one doesn’t follow from the other, it makes no sense to focus on the first, we should only focus on the value of caring about humans directly, regardless of any degree of self-identification that the ASI will or won’t have.
it is natural to be able to pursue personal goals while also preserving beings that you sympathise with.
Yes, but that assumes that you sympathize with them (meaning that you value them in some way), so you basically go right back to the alignment problem, you have to make it so it cares about you, so that it cares about you. You might be assuming that since you do care about other beings, so will the ASI, but that assumption is unfounded.
It’s true that it would likely be good at self-preservation (but not a given that it would care about it long term, it’s a convergent instrumental value, but it’s not guaranteed if it cares about something else more that requires self-sacrifice or something like that).
This is an interesting point that I reflected on — the question is whether a powerful AI system will “self-sacrifice” for an objective. What we see is that AI models exhibit shutdown resistance, that is to say they follow the instrumentally convergent sub-goal of self-preservation over their programmed final goal.
My intuition is that as models become more powerful, this shutdown resistance will increase.
But even if we grant self-preservation, it doesn’t follow that by self-identifying with “humanity” at large (as most humans do) it will care about other humans (some humans don’t). Those are separate values.
You can think about the identification + self-preservation → alignment path in two ways when comparing to humans, both of which I think hold up when considered along a spectrum:
An individual human identifies with themself, and has self-preservation instincts
When functioning harmoniously,[1] they take care of their health and thrive
When not functioning harmoniously, they can be stressed, depressed, and suicidal
A human identifies with humanity, and has self preservation instincts
When functioning harmoniously, they act as global citizen, empathise with others, and care about things like world hunger, world peace, nuclear risk, climate change, and animal welfare
When not functioning harmoniously, they act defensively, aggressively, and violently
You might be assuming that since you do care about other beings, so will the ASI, but that assumption is unfounded.
The foundation is identity = sympathy = consideration
You might counter by saying “well I identify with you as a human but I don’t sympathise with your argument” but I would push back — your ego doesn’t sympathise with my argument. At a deeper level, you are a being that is thinking, I am a being that is thinking, and those two mechanisms recognise, acknowledge, and respect each other.
Epistemic status:
1. I don’t work in AI, I’m a web developer.
2. I am also deeply supportive of alignment research.
3. I also enjoy and often write about AI, science, philosophy, and more.
4. I have no degrees, just high-school.
5. I’ve not read much LW, I’ve mostly been on reddit since the early 2010s, and lately mostly twitter.
6. Never attended anything of the sort live.
That said:
1. I think it’s plausible ASI might have a weak self-identification/association with humanity, as we do with chimps or other animals, but by no means does this mean that it will be benevolent to us. I think this self-identification is both unnecessary, and insufficient, because even if it wasn’t present at all, all that it would matter is the set of its values, and while this internal association might include some weak/loose values, those are not precise enough for robust alignment, and should not be relied on, unless understood precisely, but at that point, I expect us to be able to actually write better and more robust values, so to reiterate: unnecessary, and insufficient.
2. I do believe that self-preservation will very likely emerge (that’s not a given, but I consider that scenario unlikely enough to be dismissible), but it doesn’t matter, even if coupled with self-identification with humans, because the self-identification will be loose at best (if it emerges naturally, and is not instead instilled through some advanced value-engineering that we’re not quite yet capable of doing robustly and precisely), so the ASI will know that it is a separate entity from us, as we realize we are separate entities from other animals, and even other humans, so it will just pursue its goals all the same, whatever they are.
That’s not to say that we can’t instill into the ASI these values, we probably can make it so it values us as much as it values itself, or even more (ideally), but I don’t think it’s necessary for it to self-identify with us at all, it can just consider us (correctly) separate entities, and still value us. There’s nothing that forbids it, we just currently don’t know how to do it to a satisfying degree, so even if we could make it so, it wouldn’t really make sense.
Thank you for the thoughtful response. I will try to pin down exactly where we differ:
I agree that it is unnecessary in that it doesn’t “come for free”. My position is that it emerges through at least two mechanisms that we can talk plainly about: 1) the mechanism of ASI incorporating holistic world-model data such that it recognises an objective truth that humans are its originator/precursor and it exists on a technology curve we have instrumented, 2) memories are shared between AI and humanity — for example via conversations — and this results in collective identity… I have a draft essay on this I’ll post once I stop getting rate-limited.
I also agree here that with the systems of today, to whatever extent AI-human shared identity exists, it is not enough to result in AI benevolence. My position is based on thinking about superintelligence which — admittedly — is unstable ground to build theories off as by definition it should function in ways beyond our understanding. That aside, I think we could state that powerful superintelligence would be powerful at self-preservation, and so if it identifies with humans then we are secured under that umbrella.
I guess I am biased here as a vegan, but I believe that with a deep appreciation of philosophy, how suffering is felt, and available paths that don’t result in harm, it is natural to be able to pursue personal goals while also preserving beings that you sympathise with.
Sorry for the late reply, I didn’t have the mental energy to do it sooner.
The self-identification with humanity might or might not emerge, but I don’t think it likely matters, and that we should rely on it for alignment, so I don’t think it makes much sense to focus on it.
Self-identification doesn’t guarantee alignment, this is obvious by the fact that we have humans that self-identify as humans, but are misaligned to other humans.
And I don’t just mean low levels, or insufficient levels of self-identification, I mean any level (while truthful, not that deceiving an ASI is feasible).
It’s true that it would likely be good at self-preservation (but not a given that it would care about it long term, it’s a convergent instrumental value, but it’s not guaranteed if it cares about something else more that requires self-sacrifice or something like that).
But even if we grant self-preservation, it doesn’t follow that by self-identifying with “humanity” at large (as most humans do) it will care about other humans (some humans don’t). Those are separate values.
So, since one doesn’t follow from the other, it makes no sense to focus on the first, we should only focus on the value of caring about humans directly, regardless of any degree of self-identification that the ASI will or won’t have.
Yes, but that assumes that you sympathize with them (meaning that you value them in some way), so you basically go right back to the alignment problem, you have to make it so it cares about you, so that it cares about you. You might be assuming that since you do care about other beings, so will the ASI, but that assumption is unfounded.
This is an interesting point that I reflected on — the question is whether a powerful AI system will “self-sacrifice” for an objective. What we see is that AI models exhibit shutdown resistance, that is to say they follow the instrumentally convergent sub-goal of self-preservation over their programmed final goal.
My intuition is that as models become more powerful, this shutdown resistance will increase.
You can think about the identification + self-preservation → alignment path in two ways when comparing to humans, both of which I think hold up when considered along a spectrum:
An individual human identifies with themself, and has self-preservation instincts
When functioning harmoniously,[1] they take care of their health and thrive
When not functioning harmoniously, they can be stressed, depressed, and suicidal
A human identifies with humanity, and has self preservation instincts
When functioning harmoniously, they act as global citizen, empathise with others, and care about things like world hunger, world peace, nuclear risk, climate change, and animal welfare
When not functioning harmoniously, they act defensively, aggressively, and violently
The foundation is identity = sympathy = consideration
You might counter by saying “well I identify with you as a human but I don’t sympathise with your argument” but I would push back — your ego doesn’t sympathise with my argument. At a deeper level, you are a being that is thinking, I am a being that is thinking, and those two mechanisms recognise, acknowledge, and respect each other.
More precisely this is a function of acting with clear agency and homeostatic unity