Worse than an unaligned AGI

So the worst-case scenario usually considered here is that an unaligned superintelligent AI emerges sooner or later and, intentionally or incidentally, kills off humanity on its way to paper-clipping the universe. I would argue that this is not the worst possibility: somewhere in the memory banks of that AI will be the memories of humanity, preserved forever, and possibly even simulated in some way, who knows. A much worse case would be an AI powerful enough to destroy humans, but not smart enough to preserve itself. Soon after the Earthlings get wiped out, the AI stops functioning or self-destructs, and with it go all memories and traces of us. This does not strike me as an unlikely possibility: intentional self-preservation is not guaranteed by any means, and there are plenty of examples of even human societies Easter-Islanding themselves into oblivion.

So, somewhat in the spirit of “dying with dignity”, I wonder whether, if and when a superintelligent AI starts to look inevitable, it would make sense to put some effort into making it at least not dumb enough to die along with us, intentionally or accidentally.

Edit: Reading the comments, it looks like my notion of what counts as worse and what counts as better is not quite aligned with that of some other humans.