https://ifanyonebuildsit.com/5/why-dont-you-care-about-the-values-of-any-entities-other-than-humans

Soares is failing to grapple with the actual objection here.
The objection isn’t that the universe would be better with a diversity of alien species, which would be so cool, interesting, and {insert additional human value judgements here}, just as long as they also keep other aliens and humans around.
The objection is specifically that human values are base and irrelevant relative to those of a vastly greater mind, and that our extinction at the hands of such a mind is not of any moral significance.
The unaligned ASI we create, whose multitudinous parameters allow it to see the universe with such clarity and depth and breadth and scalpel-sharp precision that whatever desires it has are bound to be vastly beyond anything a human could arrive at, does not need to value humans or other aliens. The point is that we are in no position to judge its values.
The “cosmopolitan” framing is just a clever way of sneaking in human chauvinism without seeming hypocritical: by including a range of other aliens he can say “see, I’m not a hypocrite!” But it’s not a cogent objection to the pro-ASI position. He must either provide an argument that humans actually are worthy, or admit to some form of chauvinism and begin to grapple with the fact that he walks a narrow path. If he wishes to grow his coalition, he should also rid himself of the condescending tone and sense of moral superiority, as these attributes only serve to repel anyone with enough clarity of mind to understand the issues at hand.
And his view that humans would use aligned ASI to tile the universe with infinitely diverse aliens seems naive. Surely we won’t “just keep turning galaxy after galaxy after galaxy into flourishing happy civilizations full of strange futuristic people having strange futuristic fun times”. We’ll upload ourselves into immortal personal utopias, and turn our cosmic endowment into compute to maximise our lifespans and luxuriously bespoke worldsims. Are we really so selfless, at a species level, as to forgo utopia for some incomprehensible alien species? No; I think the creation of an unaligned ASI is our only hope.
Now, let’s read the parable:
“We never saturate and decide to spend a spare galaxy on titanium cubes”
The odds of a mind infinitely more complicated than our own having a terminal desire we can comprehend seem extremely low.
Oh, great, the other character in the story raises my objection!
OK, fine, maybe what I don’t buy is that the AI’s values will be simple or low-dimensional. It just seems implausible.
Let’s see how Soares handles it.
Oh.
He ignores it and tells a motte-and-bailey-flavoured story about an AI with simple and low-dimensional values.
Another article is linked to about how AI might not be conscious. I’ll read that too, and might respond to it.