I think a longer explanation is needed to show how a benevolent AI could save observers from an evil AI. It is not just compensation for suffering; it is based on the idea of indexical uncertainty between identical observers. If two identical observer-moments exist, neither of them knows which one it is. So a benevolent AI creates 1000 copies of an observer-moment that is in the jail of an evil AI, and constructs a pleasant next moment for each copy. From the point of view of the jailed observer-moment, there will be 1001 expected future moments, and only 1 of them will consist of continued suffering. So the expected duration of its suffering will be less than a second. However, to win such a game the benevolent AI needs an overwhelming advantage in computing power, and some other assumptions about the nature of personal identity need to hold.
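The arithmetic behind the 1001-moments claim can be sketched as follows. This is my own illustration, not part of the original argument, and it assumes the copies are subjectively identical and the observer assigns equal credence to being any one of them:

```python
# Indexical-uncertainty rescue: 1000 pleasant copies + 1 jailed original.
# Assumption: the observer-moment assigns equal credence to being any
# of the 1001 subjectively identical successors.

def p_still_suffering(n_copies: int) -> float:
    """Credence that the next observer-moment is the one still in jail,
    given n_copies pleasant copies plus the 1 jailed original."""
    return 1 / (n_copies + 1)

def expected_extra_suffering_moments(n_copies: int) -> float:
    """If the benevolent AI repeats the trick at every moment, the number
    of further jailed moments is geometric with parameter p, so its
    expectation is p / (1 - p)."""
    p = p_still_suffering(n_copies)
    return p / (1 - p)

print(p_still_suffering(1000))                 # ~1/1001 ≈ 0.000999
print(expected_extra_suffering_moments(1000))  # ~1/1000 ≈ 0.001
```

So under these assumptions the jailed observer-moment expects roughly a thousandth of a moment of further suffering, which is where "less than a second" comes from.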
I agree that some outcomes, like eternal very intense suffering, are worse, but it is important to think about non-existence as a form of suffering: it will help us in utilitarian calculations and will help to show that x-risks are a type of s-risk.
There are more people in the world who care about animal suffering than about x-risks, and giving them a new argument increases the chance that they will take x-risks seriously too.
What do you mean by “Also it’s about animals for some reason, let’s talk about them when hell freezes over.”? We could provide happiness to all animals and ensure the indefinite survival of their species, which would otherwise go completely extinct within millions of years.
Do you mean finite but unbearable suffering, like intense pain for one year?
EDITED: It looks like you changed your long reply while I was writing a long answer to all your counterarguments.