In §3.1–3.3, you look at the main known ways that altruism between humans has evolved — direct and indirect reciprocity, as well as kin and group selection[1] — and ask whether we expect such altruism from AI towards humans to be similarly adaptive.
However, as observed in R. Joyce (2007), The Evolution of Morality (p. 5):

> Evolutionary psychology does not claim that observable human behavior is adaptive, but rather that it is produced by psychological mechanisms that are adaptations. The output of an adaptation need not be adaptive.
This is a subtle distinction which demands careful inspection.
In particular, are there circumstances under which AI training procedures and/or market or evolutionary incentives may produce psychological mechanisms which lead to altruistic behavior towards human beings, even when that altruistic behavior is not adaptive? For example, could altruism learned towards human beings early on, when humans have something to offer in return, be “sticky” later on (perhaps via durable, self-perpetuating power structures), when humans have nothing useful to offer? Or could learned altruism towards other AIs be broadly-scoped enough that it applies to humans as well, just as human altruistic tendencies sometimes apply to animal species which can offer us no plausible reciprocal gain? This latter case is closely analogous to the situation analyzed in your paper, and yet in reality the outcome has (sometimes) differed from the one your analysis predicts.[2]
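To make that last possibility concrete, here is a minimal toy simulation (my own construction, not anything from your paper; the population size, payoffs, and cue structure are all illustrative assumptions). Selection favors a coarse mechanism, “cooperate with anything bearing the partner cue,” because most cue-bearers reciprocate; the same mechanism then extends costly cooperation to the minority that never reciprocates, which is exactly Joyce's adaptive-mechanism / non-adaptive-output distinction:

```python
# Toy model (illustrative only): agents evolve a single rule, "cooperate with
# any partner bearing cue X." During selection, 95% of cue-bearing partners
# reciprocate, so the rule is favored even though it also extends costly
# cooperation to the 5% who never reciprocate -- analogous to human altruism
# spilling over onto animals that offer no reciprocal gain.

import random

POP = 200               # population size (arbitrary)
GENS = 300              # generations of selection
BENEFIT = 3.0           # payoff when a partner reciprocates our cooperation
COST = 1.0              # cost of cooperating, paid whether or not it is returned
P_RECIPROCATOR = 0.95   # fraction of cue-bearing partners that reciprocate

def fitness(coop_prob: float, rng: random.Random, rounds: int = 50) -> float:
    """Average payoff of an agent that cooperates with probability coop_prob
    whenever it meets a cue-bearing partner."""
    total = 0.0
    for _ in range(rounds):
        if rng.random() < coop_prob:
            total -= COST
            if rng.random() < P_RECIPROCATOR:
                total += BENEFIT
    return total / rounds

def evolve(seed: int = 0) -> float:
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(POP)]  # initial cooperation probabilities
    for _ in range(GENS):
        scored = sorted(pop, key=lambda p: fitness(p, rng), reverse=True)
        survivors = scored[: POP // 2]        # truncation selection
        # Offspring inherit the parent's rule with small mutation.
        pop = survivors + [
            min(1.0, max(0.0, p + rng.gauss(0, 0.02))) for p in survivors
        ]
    return sum(pop) / len(pop)

if __name__ == "__main__":
    print(f"mean evolved cooperation probability: {evolve():.2f}")
    # The evolved rule fires on the cue, not on actual reciprocation, so the
    # same agents also cooperate with the 5% of cue-bearers who never return
    # the favor: an adaptive mechanism with locally non-adaptive output.
```

The point of the toy model is only that selection sees the mechanism's average payoff over the training distribution, so a rule that is cheap to specify can outcompete a more discriminating one and carry its behavior into cases where it no longer pays.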
I don’t claim the conclusion is wrong, but I think a closer look at this subtlety would give the arguments for it more force.
[1] Although you don’t look at network reciprocity / spatial selection.
[2] Even factory farming, which might seem like a counterexample, is not really one: the very existence of humans who are altruistically motivated to eliminate it, and who have a real shot at success, itself demands explanation under your analysis.