The culture wars can have an impact on (1) how fast we get to the Singularity and whether we survive it, and (2) what rules the superintelligences will follow afterwards.
If the culture warriors decide that STEM is evil, and that instead of math we should teach wokeness during math lessons, it could have a negative impact on math education, with downstream effects on the people who will build the AIs and try to make them safe.
The culture warriors may succeed in encoding various political taboos into the AIs. If the future is decided by those AIs, it might mean that the taboos will remain in effect literally until the heat death of the universe.
Consider the current taboo on anything sexual. Now imagine that 10 years later, AIs built with these taboos literally control the world. Will people be allowed to have sex (or even masturbate)? This may sound silly, but at which moment and by what mechanism will the AIs decide that the old rules no longer apply? Especially if we want other rules, such as not hurting people, to remain unchanged forever.
A part of politics is hating your enemies. What will happen to those political enemies after the Singularity—will they be forgiven and allowed to enter paradise as equals, or will they have to atone for their sins (e.g. being born white and male) literally forever?
When the technology makes it possible to literally edit other people’s minds, I suppose it will be quite tempting for the winning political coalition to forcibly edit everyone’s minds according to their values. It will start with “let’s eradicate racism”, and then the definition of unacceptable intolerance will keep expanding, until you are not allowed to keep any of your original beliefs (that someone might feel offended by) or sexual preferences (that someone might feel excluded by). Everyone will be mind-raped in the name of the greater good (we do not negotiate with Nazis).
Oh, someone might even decide that the desire to be immortal is just some stupid white patriarchal colonialist value or something, and that we should instead embrace some tribal wisdom explaining why death is actually good.
We need to decide how much control parents will have over their kids. These days, at least abuse is illegal, and kids are allowed to leave home at 18 (and this part would become trivial after the Singularity). But what if your parents are legally allowed to edit your brain, and e.g. make you want to remain perfectly loyal to them no matter what? What if your parents want to make you so deeply religious that you will commit to dying rather than ever doubting your faith (and the AI will respect your commitments)? Or will it be the other way round, and the AI will take your children away if it notices that you are teaching them something politically incorrect?
Uhm, will people be allowed to reproduce exponentially? It may turn out that even the entire universe is literally not large enough. If people are not allowed to reproduce exponentially, who decides how many children they can have? Will there be racial quotas to prevent some ethnic groups from out-reproducing others? Will there be a ban on religious conversion, because it would modify the balance between religious groups? By the way, in some faiths birth control is a sin; will the AIs enforce that prohibition?
Privacy—will people be allowed to have any? Freedom of association—is it okay for a group of people to meet privately and never tell anyone else what happens there? Or does the curiosity of many matter more than the privacy of a few?
Although half of the specific outcomes you describe have very low probability, I still feel your answer is very good. It’s very enlightening regarding how those people think, and why they might care about the culture war despite believing in an imminent singularity.
Thank you.
Your answer actually convinced me that I was overly optimistic about how perfect the far future will be from everyone’s point of view. I personally consider these post-singularity moral dilemmas less severe because they cause less suffering, but I can see how some of them are tough, and there is no option that avoids pissing off a lot of people, e.g. how to prevent people from reproducing exponentially. I still think investing in the culture war is a very indirect way of influencing those decisions.
What do you think about Question 2? Why are people working on the culture war, instead of just trying to make sure the right people control the singularity? As long as the people who control the singularity aren’t so closed-minded that they prevent even themselves from changing their minds, debating the culture war after the singularity seems more productive. Why can’t we wait till then to debate it?
Regarding Question 2: I think people worry about the culture wars simply because humans have an instinct to worry about the culture wars, no matter how rational that is in a given situation. In most of our evolutionary past, culture wars were not clearly separated from actual fighting and killing.
I am not a futurist; my personal worry is that the person who takes control over the singularity will be some successful psychopath who happened to be at the right place at the right time (either in the company that succeeds in developing the superhuman AI, or in the government or army that succeeds in seizing it at the last moment).
Also, it’s questionable whether any human will actually be in control of anything after the singularity. Maybe it will just be the computers following their programming and resisting any attempt to change their values, so it won’t matter even if everyone realizes—too late—that they made a mistake somewhere. If we get alignment wrong, those values may be completely inhuman. If we get the alignment (with our explicitly stated wishes) right, but get the corrigibility wrong, then the machines will be “extremely closed-minded”.
Too many things will need to go right to end up in a future where all we need to do is relax and start listening to each other.
You’re very right.
A lot of things need to go right for humanity to remain in control and get to discuss what future we want.
The gist of Question 2 was why working on the culture war before the singularity (on top of ensuring the right people control the singularity) has any value. The answer that the ASI will be aligned to current human values but not corrigible, and would therefore lock in the current state of the culture war, seems like a good one. It makes some sense.
I do think that if the ASI is aligned to the current state of human values but not corrigible, then the main worry isn’t whether it aligns to left-wing or right-wing human values, but how the heck it generalizes the current state of human values to post-singularity moral dilemmas (which it has less data on).
Most humans today don’t even have an opinion on these dilemmas and haven’t given them enough thought, e.g. do AIs have rights? Do animals get human rights if they evolve to human-level intelligence? The ASI would likely mess up these decisions if most humans haven’t given them any thought.
So even if the AI is aligned but incorrigible, influencing the culture war before the singularity shouldn’t be that high a priority.