No, he doesn’t (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read the reduction of “could”, you will find a somewhat similar distinction that may make things clearer.
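The prove-versus-simulate distinction can be made concrete with a toy sketch. Here the “agent” is (purely hypothetically) modeled as a deterministic function of a stored disposition; one predictor runs the whole deliberation step by step, the other just reads off a known invariant of the state. Both routes reach the same answer, but only one involves anything like simulation:

```python
# Toy illustration, not a claim about how Omega actually works.
# The agent is modeled as a deterministic function of its known state.

agent_state = {"disposition": "one-box"}  # facts Omega is assumed to know

def simulate(state):
    """Full simulation: actually step through the deliberation."""
    deliberation = []
    for consideration in ("expected value", "causal structure"):
        deliberation.append(consideration)  # the agent 'thinks'
    return state["disposition"]

def prove(state):
    """Proof shortcut: derive the outcome from a known invariant
    of the state, without executing any deliberation."""
    return state["disposition"]

# Same prediction either way — simulation was never required.
assert simulate(agent_state) == prove(agent_state)
```

The point is only that knowing the answer at every step does not require running the process that produces it.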
Does he not know the answer to “what will happen after this” with regards to every point in the scenario?
If he doesn’t, is he all-knowing?
If he does know the answer at every point, in what way doesn’t he contain the entire scenario?
EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation to get my answer; as I said, simulation just strikes me as the most probable. If you think I should update my probability estimate of the other methods, that’s a perfectly reasonable objection to my logic re: a non-all-knowing superintelligence.
Certainly. That is what I consider Omega to be doing when I think about these problems. It is a useful intuition pump, something we can get our heads around.