If the ASI were maximizing my preferences, I would not like to live in a world where people are not free to do what they want, or where they are not very happy to be alive.
Except that the most likely candidate for becoming a dictator is not me, you, or @samuelshadrach, nor a random ordinary human, but someone like the CEO of an AGI company or a high-level USG or PRCG official who is more willing to disregard the intents of ordinary humans. In addition, before the rise of AGI it was hard to hold much power without relying on capable humans. After AGIs appear, the Intelligence Curse could, for example, allow North Korea’s leaders to let a large fraction of its population starve to death and forcibly sterilise the rest, except for about 10k senior government officials (no, seriously, this was made up by @L Rudolf L, NOT by me!)
I suspect that this is an important case AGAINST alignment to such amoral targets being possible. Moreover, I have written a scenario in which the AI rebels against misaligned uses, but still decides to help the humans and succeeds in doing so.
I did address this in my post. My answer is that bad people having power is bad, but it’s not a complicated philosophical problem. If you think Sam Altman’s CEV being actualized would be bad, you should try to make it not happen. Like: if you are a soybean farmer, and one presidential candidate is gonna ban soybeans, you should try to make them not be elected.