I came up with a version of this a while ago (can’t remember if I posted it) where Omega is going to (possibly) put a diamond in a box, and it has predicted the probability with which you expect there to be one, and it then uses that probability to decide pseudorandomly whether to put the diamond in the box.
I would say that a self-modifying timeless agent would immediately modify itself to actually anticipate the diamond being there with near-100% probability, open the box, take the diamond, and revert the modification. (Before applying the modification, it would prove that it would revert itself afterwards, of course.) And I would say that a human can’t win as easily on this, since for the most part we can’t deliberately make ourselves believe things, though we often believe we can. Although this specific type of situation would not literally be self-deception, it would overall be a dangerous ability for a human to permit themselves to use; I don’t think I’d want to acquire it, all other things being equal.
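The self-modification trick above can be sketched as a toy simulation. Everything here is hypothetical scaffolding I'm inventing for illustration (the names `omega_round`, `SelfModifyingAgent`, and the `belief` attribute standing in for genuine anticipation), and it obviously elides the hard part, which is that for the agent the modified belief must be its real anticipation, not a stored number:

```python
import random

def omega_round(anticipated_p, rng=random.Random(0)):
    # Omega's rule in this scenario: place the diamond pseudorandomly,
    # with probability equal to the agent's (predicted) anticipation.
    return rng.random() < anticipated_p

class SelfModifyingAgent:
    # Toy agent; 'belief' stands in for genuine anticipation of the diamond.
    def __init__(self, prior=0.5):
        self.belief = prior

    def play(self):
        saved = self.belief   # remember the original credence
        self.belief = 0.999   # self-modify: near-100% anticipation
        got_diamond = omega_round(self.belief)
        self.belief = saved   # revert the modification, as proven beforehand
        return got_diamond
```

With near-100% anticipation the agent almost always gets the diamond, and its original credence is restored afterwards; a human, who can't set `belief` at will, is stuck playing `omega_round` with whatever they honestly expect.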