I agree that preventing s-risks is important, but I will try to look at possible counterarguments:
A benevolent AI will be able to fight an acausal war against an evil AI in another branch of the multiverse by creating more of my happy copies, or more paths from a suffering observer-moment to a happy observer-moment. So creating a benevolent superintelligence will help against suffering everywhere in the multiverse.
Non-existence is the worst form of suffering if we define suffering as anything that goes against our most important values. Thus x-risks are s-risks. Pain is not always suffering, as the existence of masochists shows.
If we pay too much attention to animal suffering, we give ground to projects like the Voluntary Human Extinction Movement, and so increase the chances of human extinction, since it was humans who created animal farms. Moreover, if we agree that non-existence is not suffering, we could kill all life on Earth and thereby stop all suffering, which is clearly not right.
A benevolent AI will be able to resurrect all possible sentient beings, including animals, and provide them with an infinite paradise, thus compensating for any current suffering of animals.
Only infinite and unbearable suffering is truly bad. We should distinguish unbearable suffering, like agony, from ordinary suffering, which is just a reinforcement-learning signal for the wetware of our brains, informing us about past wrong decisions or the need to call a doctor.
I think all of these are quite unconvincing and the argument stays intact, but thanks for coming up with them.
I think a longer explanation is needed to show how a benevolent AI could save observers from an evil AI. It is not just compensation for suffering; it is based on the idea of indexical uncertainty between identical observers. If two identical observer-moments exist, neither knows which of the two it is. So a benevolent AI creates 1000 copies of an observer-moment that is imprisoned by the evil AI, and constructs a pleasant next moment for each copy. From the point of view of the imprisoned observer-moment, there are 1001 possible next moments, and only 1 of them consists of continued suffering, so the expected duration of its suffering is less than a second; a worked version of this arithmetic is sketched below. However, to win such a game the benevolent AI needs an overwhelming advantage in computing power, and some open questions about the nature of personal identity need to be resolved.
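To make that arithmetic concrete, here is a minimal sketch of the expected-suffering calculation. The 100 ms moment duration and the assumption that the copying is repeated at every subsequent step are illustrative assumptions of mine, not part of the argument above:

```python
# Expected suffering under indexical uncertainty: a minimal sketch.
# Assumptions (illustrative, not from the original argument): each
# observer-moment lasts MOMENT_S seconds, and the benevolent AI creates
# N_COPIES happy continuations of the jailed observer-moment at every step.

MOMENT_S = 0.1    # assumed duration of one observer-moment, in seconds
N_COPIES = 1000   # happy copies created by the benevolent AI

# At each step there are N_COPIES + 1 indistinguishable continuations,
# and only the one still in jail keeps suffering.
p_continue = 1 / (N_COPIES + 1)

# Expected number of further suffering moments is a geometric series:
# sum over k >= 1 of p_continue**k = p_continue / (1 - p_continue)
expected_moments = p_continue / (1 - p_continue)
expected_seconds = expected_moments * MOMENT_S

print(f"P(suffering continues at each step) = {p_continue:.6f}")  # ~0.000999
print(f"Expected further suffering: {expected_seconds:.6f} s")    # ~0.0001 s
```

With 1000 copies the expected remaining suffering comes out to roughly a ten-thousandth of a second, which is where the "less than a second" claim comes from, and it shrinks further as the copy count grows.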
I agree that some outcomes, like eternal and very intense suffering, are worse, but it is important to treat non-existence as a form of suffering, as this will help us in utilitarian calculations and will show that x-risks are a type of s-risk.
There are more people in the world who care about animal suffering than about x-risks, and giving them a new argument increases the probability of x-risks.
What do you mean by “Also it’s about animals for some reason, let’s talk about them when hell freezes over”? We could provide happiness to all animals and ensure the indefinite survival of their species, which would otherwise go completely extinct within millions of years.
Do you mean finite but unbearable suffering, like intense pain for one year?
EDITED: It looks like you changed your long reply while I was writing a long answer to all your counterarguments.