Thank you for writing this. I needed a conceptual handle like this to give shape to an intuition that’s been hanging around for a while.
It seems to me that our current civilizational arrangement is itself poorly aligned, or at least prone to generating unaligned subentities. In other words, we have a generalized agent-alignment problem. Asking unaligned non-AI agents to align an AI is a Godzilla strategy, so work on aligning already-existing entities is instrumental for AI alignment.
(On a side note, I suspect there's a lot of overlap between AI alignment and generalized alignment, but that's another argument entirely.)
My mentality as well. We can't even get corporations to stop polluting. Probably whatever solves AI alignment will also help align our egregores, and vice versa.