If the AI makes a habit of killing people for disagreeing, then we failed. If the AI enforces “western” anything in particular, then we failed. Of course, you’re asserting that success is impossible, and that it’s not possible to find an intersection of human values across the globe that produces a significant increase in empirical cooperation; I don’t think I’m ready to give up yet. I do agree that conflict is currently fundamental to life, but I also don’t think we need to keep conflicting at whole-organism scale: most disagreements can likely be resolved at sub-organism scale, as fights over who gets to override whose preferences, and by how much, and we should viciously minimize that kind of conflict. E.g., if someone is a sadist who desires the creation and preservation of conflict for its own sake, then that person is someone I personally would claim should be stopped by the global cooperation group.
Your phrasing sounds to me like you’d endorse the claim that might makes right, or that you feel others endorse it, so that playing tit-for-tat against them requires you to endorse might-makes-right yourself.
But based on results in various forms of game theory, especially evolutionary game theory, I expect generous-tit-for-tat-with-forgiveness can in fact win long term, and that we can end up with a very highly cooperative society that still ensures every being maintains tit-for-tat behavior. Ultimately the difficulty of reducing conflict boils down to reducing scarcity relative to the current number of organisms, and I think AI, if used well, can get us out of the energy-availability mess that society has gotten itself into.
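A toy simulation makes the case for generosity concrete (the payoff values and the 10% forgiveness rate below are my own illustrative choices, not taken from any specific paper): in a noisy iterated prisoner’s dilemma, plain tit-for-tat can lock two cooperative players into cycles of mutual retaliation after a single accidental defection, while a forgiving variant recovers.

```python
import random

# Row player's payoff in the prisoner's dilemma. C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def generous_tit_for_tat(my_hist, their_hist, forgiveness=0.1):
    # Like tit-for-tat, but forgive a defection with some probability,
    # which breaks endless retaliation cycles caused by noise.
    if their_hist and their_hist[-1] == "D" and random.random() >= forgiveness:
        return "D"
    return "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strategy_a, strategy_b, rounds=200, noise=0.05, seed=0):
    # Each intended move is flipped with probability `noise`, modelling
    # misunderstandings between otherwise cooperative agents.
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        hist_a.append(a)
        hist_b.append(b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a, score_b
```

With `noise=0`, two tit-for-tat players cooperate forever; the interesting behavior appears once noise is nonzero, where the generous variant's occasional forgiveness lets a pair resynchronize on cooperation instead of trading defections indefinitely.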
You’ll have to look up the actual fields of research I’m referencing to get the details; I make no claim to be an expert on them, and since I’m hypothesizing out loud in response to your bitter commentary, I wouldn’t expect you to be impressed. But it’s the response I have to give.
If the AI enforces “western” anything in particular, then we failed.
Now you’ve made me afraid of the opposite failure mode: imagine that an AI correctly calculates the coherent extrapolated volition of humankind, and then someone in the anti-bias department of Google checks it and says “nope, looks too western” and adds some manual overrides, based on their own idea of what non-western people actually want.
Viliam—this failure mode for AI is horrifyingly plausible, and all too likely.
We already see a strong increase in wokeness among AI researchers, e.g. the panic about ‘algorithmic bias’. If that trend continues, then any AI that looks aligned with some group’s ‘politically incorrect values’ might be considered entirely ‘unaligned’, taboo, and dangerous.
Then the fight over what counts as ‘aligned with humanity’ will boil down to a political fight over what counts as ‘aligned with elite/dominant/prestigious group X’s preferred political philosophy’.
I would note, since you use the word “woke”, that the things typically considered woke to reason about, such as the rights of minorities, are in fact particularly important to get right. Politically incorrect values are often genuinely unfriendly to others; there’s a reason they don’t fare well politically. Generally, “western values” include things like coprotection, individual choice, and the consent of the governed: all very woke values. It’s important to be able to design AI that will protect every culture from every other culture, or we risk not merely a continuation of unacceptable intercultural dominance, but the possibility that the AI turns out to be biased against all of humanity. Nothing less than a general solution to coprotection will protect humanity from demise.
“Woke” cannot be a buzzword that causes us to go silent about the things people are sometimes irrational about. Those people are right that these issues are important, just not always exactly right about what can be done to improve things. And importantly, there really is agentic pressure in the world to keep things in a bad state: defect-heavy reproductive strategies require that someone end up on the losing side.
It’s important to be able to design AI that will protect every culture from every other culture
This makes sense for the cultures that exist with the “consent of the governed”, but what about cultures such as Sparta or the Aztecs? Should they also be protected from becoming more… like us? Is wanting to stop human sacrifices colonialism? (What about female genital mutilation?)
The individuals within each culture are themselves cultures that should be protected. Bacteria are also cultures that should be protected. We need a universalized, multi-scale representation of culture, and we need to build the tools that allow every culture to negotiate peace with every other culture. If that means some large cultures (e.g., large groups) need to protect tiny cultures (e.g., individuals) from medium-sized cultures, then negotiation with the medium-sized culture is still needed. We need to be able to identify and describe the smallest edit to a culture that preserves and increases cross-culture friendliness, while also preserving the distinct cultures as beings in their own right.
As a group of cultures, we do have a responsibility to make intercultural demands of non-violation, but we can do this in a way that minimizes value drift about the self. It’s just a question of ensuring that the subcultures a larger culture wants to reject get to exist in their own form.
(“Culture” is used here to mean “self-preserving information process”, i.e., all forms of life.)
Yeah, that would also be failure, and it’s approximately the same thing as the worry I was replying to. I don’t know which is more likely; Google is a high-class-Indian company at this point, so that direction seems more likely, but either outcome is a bad approximation of what makes the world better.