In one sense, I no longer endorse the previous comment. In another sense, I sort of endorse the previous comment.
I was basically wrong that alignment requires human values to be game-theoretically optimal, and I now think cooperation is doable without relying on game-theoretic tools like enforcement. The situation with AI alignment is very different from the situation with human alignment: we have access to the AI's brain and can directly reward good behavior and penalize bad behavior, and we have a very powerful optimizer in SGD that lets us straightforwardly select over minds and directly edit the AI's brain. Neither of these is available for aligning humans, partly for ethical reasons and partly for technological ones.
I also now think my human-animal alignment analogy is almost as bad as the human-evolution analogy, which is worthless. A better analogy for predicting the likelihood of AI alignment is the alignment between the prefrontal cortex and survival values, or innate reward alignment, which is very impressive alignment.
However, even assuming aligned AIs, I think democracy is likely to decay fairly quickly under AI, especially given the likely power imbalances, and especially hundreds of years into the future. We will likely retain some freedoms under aligned AI rule, but I expect far fewer than we're used to today, and I expect governance to transition into a form of benevolent totalitarianism.