Presumably the superintelligence has more confidence than I do in its estimate of the value distribution of that opaque computation, and has some experience in creating new moral patients on that kind of substrate, so it can make the call whether to preserve the system or wipe and reuse it.

With my limited knowledge of this far-distant situation, I'd expect it's usually better to terminate unknowns in order to create known-positives. However, a whole lot depends on the specifics, and on how much value the AI places on variety, striving, or other multi-dimensional aspects, rather than on a simple hedonistic metric.