The answer to that depends on how the time machine works internally. If it operates on a “reset unless a message from the future is received saying not to” sort of deal, then you’re fine. Otherwise, you die. And neither situation has an analogue in the related AI design.
I don’t think it prevents the wireheading scenario that many people consider undesirable. For instance, if an AI modifies everybody into drooling idiots who are made deliriously happy by pressing “Accept Outcome” as often and forcefully as possible, it wins.