I don’t know how you “solve inner alignment” without thereby making it possible for any sufficiently powerful organisation to have an AI, at whatever capability level we’ve solved it for, that is fully aligned with that organisation’s interests, and nearly all powerful organisations are Moloch. The AI does not itself need to ruthlessly optimise for something opposed to human interests if it is fully aligned with an entity that will do that for it.
The AI corporation does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.