Does anyone have any compelling arguments on why an FAI would or would not recreate me if I die, decompose, and then the singularity occurs a long time after my death?
A possible reason why not, besides physical impossibility: are you sure you want to be recreated? I don’t mean to invoke any tired cishumanist tropes about death being the natural order of things or any nonsense like that. But it seems plausible to me that a benevolent superintelligence would have no trouble contriving configurations of matter that at least some of us would prefer to personal survival; specifically, the AI could create people better than us by our own standards.
Presumably you don’t want to live on exactly as you are now; you have a desire for self-improvement. But given the impossibility of ontologically fundamental mental entities, there’s an identity between “self-improvement” and “being deleted and immediately replaced with someone else who happens to be very similar in these-and-such ways”; once you know what experiences are taking place at what points in spacetime, there’s no further fact of the matter as to which of them are “you.” In our own world, this is an unimportant philosophical nicety, but superintelligence has the potential to open up previously inaccessible edge cases, such as extremely desirable outcomes that don’t preserve enough personal identity to count as “survival” in the conventional sense.