But anyway, it sometimes seems to me that you often advocate a morality regarding AI relations that doesn’t benefit anyone who currently exists, or the coalition that you are a part of. This seems like a mistake. Or worse.
I dispute this, since I’ve argued for the practical benefits of giving AIs legal autonomy, which I think would likely benefit existing humans. Relatedly, I’ve also talked about how I think hastening the arrival of AI could benefit people who currently exist. Indeed, that’s one of the best arguments for accelerating AI: by ensuring AI arrives sooner, we can accelerate the pace of medical progress, among other useful technologies. This could ensure that currently existing elderly people, who would otherwise die without AI, are saved and live longer, healthier lives than they otherwise would.
(Of course, this must be weighed against concerns about AI safety. I am not claiming that there is no tradeoff between AI safety and acceleration. Rather, my point is that, despite the risks, accelerating AI could still be the preferable choice.)
However, I do think there is an important distinction to make here between the following groups:
1. The set of all existing humans
2. The human species itself, including all potential genetic descendants of existing humans
Insofar as I have loyalty towards a group, I have much more loyalty towards (1) than (2). It’s possible you think that I should see myself as belonging to the coalition comprised of (2) rather than (1), but I don’t see a strong argument for that position.
To the extent it makes sense to think of morality as arising from game theoretic considerations, there doesn’t appear to be much advantage for me in identifying with the coalition of all potential human descendants (group 2) rather than with the coalition of currently existing humans plus potential future AIs (group 1 + AIs). If we are willing to extend our coalition to include potential future beings, then I would seem to have even stronger practical reasons to align myself with a coalition that includes future AI systems. This is because future AIs will likely be far more powerful than any potential biological human descendants.
I want to clarify, however, that I don’t tend to think of morality as arising from game theoretic considerations. Rather, I mostly think of morality as simply an expression of my personal preferences about the world.