I admit I am not very good at this kind of thinking, but it seems to me that this could easily do more harm than good. By simulating suffering minds, even if you ultimately save them, you are increasing the number of moments of suffering.
To remove computers completely from the picture, imagine a person who previously didn’t want to have children and decides instead to have children, to abuse them while they are small, and then to stop abusing them when they grow up. And this person considers it a good thing, by the following reasoning: in an infinite universe, someone already has children like these, and some of those parents are abusing them; the only difference is that those parents are unlikely to stop the abuse afterwards, unlike me. Therefore I create a positive indexical uncertainty for abused children (now they have a chance to be my children, in which case they can hope for the abuse to stop).
In the spirit of “it all adds up to normality”, such an excuse for child abuse should not be accepted. If you agree, is it fundamentally different when we start talking about computers instead of parents?
You are right, it is a problem, but I suggested a possible patch in Update 2: the idea is to create copies not of the moment S(t), but only of the next moment S’(t+1), in which the pain disappears and S is happy that he escaped from eternal hell.
In your example, it would be like having healthy children but telling them that their life was very bad in the past, and that now they are cured and even their bad memories are almost erased. (It may seem morally wrong to lie to children, but it could be framed as watching a scary movie or discussing past dreams. Moreover, it actually works: I often had dreams about bad things happening to me, and it was a relief to wake up; thus, if a bad thing happens to me, I may hope that it is just a dream. Unfortunately, it does not always work.)
In other words, we create indexical uncertainty not about my current position but about my next moment of experience.