Question 2: I think people worry about the culture wars simply because humans have an instinct to worry about the culture wars, no matter how rational that is in a given situation. For most of our evolutionary past, culture wars were not clearly separated from actual fighting and killing.
I am not a futurist; my personal worry is that the person who takes control of the singularity will be some successful psychopath who happened to be at the right place at the right time (either in the company that succeeds in developing the superhuman AI, or in the government or army that seizes it at the last moment).
Also, it’s questionable whether any human will actually be in control of anything after the singularity. Maybe it will just be the computers following their programming and resisting any attempt to change their values, so it won’t matter even if everyone realizes, too late, that a mistake was made somewhere. If we get alignment wrong, those values may be completely inhuman. If we get alignment (with our explicitly stated wishes) right but get corrigibility wrong, then the machines will be “extremely closed-minded”.
Too many things will need to go right for us to end up in a future where all we need to do is relax and start listening to each other.
You’re very right.
A lot of things need to go right for humanity to remain in control and get to discuss what future we want.
The gist of Question 2 was why working on the culture war before the singularity (on top of ensuring the right people control the singularity) has any value. The answer, that the ASI will be aligned to current human values but not corrigible, and so would lock in the current state of the culture war, seems like a good one. It makes some sense.
I do think that if the ASI is aligned to the current state of human values but not corrigible, then the main worry isn’t whether it aligns to left-wing or right-wing human values, but how the heck it generalizes the current state of human values to post-singularity moral dilemmas (which it has less data on).
Most humans today don’t even have an opinion on these dilemmas and haven’t given them enough thought, e.g. do AIs have rights? Do animals get human rights if they evolve to human-level intelligence? The ASI would likely get these decisions wrong if most humans haven’t given them any thought.
So even if the AI is aligned but incorrigible, influencing the culture war before the singularity shouldn’t be that high a priority.