You’re very right.
A lot of things need to go right for humanity to remain in control and get to discuss what future we want.
The gist of Question 2 was why working on the culture war before the singularity (on top of ensuring the right people control the singularity) has any value. The answer, that the ASI will be aligned to current human values but not corrigible, and so would lock in the current state of the culture war, seems like a good one.
I do think that if the ASI is aligned to the current state of human values but not corrigible, then the main worry isn't whether it aligns to left-wing or right-wing values, but how the heck it generalizes the current state of human values to post-singularity moral dilemmas, on which it has far less data.
Most humans today don't even have an opinion on these dilemmas, e.g. do AIs have rights? Do animals get human rights if they evolve to human-level intelligence? The ASI would likely mess up these decisions precisely because most humans haven't given them any thought.
So even if the ASI is aligned but incorrigible, influencing the culture war before the singularity shouldn't be that high a priority.