An All-knowing Omega by definition contains a simulation of this exact scenario.
No, he doesn’t (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read “reduction of could” you will find a somewhat similar distinction that may make things clearer.
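A toy Python sketch of that distinction (the BrainState class and its one_boxer field are my own illustrative assumptions, not anything from the thread): the predictor can either run the decision procedure step by step, or derive its inevitable output from a known property of the state without running anything.

```python
# Two ways to predict the same choice: run the procedure, or prove the
# outcome from a static property of the brain state. All names here are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class BrainState:
    one_boxer: bool  # one known fact about the agent's decision procedure

def predict_by_simulation(agent: BrainState) -> str:
    """Step through the agent's deliberation and observe what it outputs."""
    for _tick in range(1_000):  # stand-in for arbitrarily detailed cognition
        pass
    return "one-box" if agent.one_boxer else "two-box"

def predict_by_proof(agent: BrainState) -> str:
    """Derive the inevitable outcome from a known property of the state;
    no simulation of the scenario ever runs."""
    return "one-box" if agent.one_boxer else "two-box"

me = BrainState(one_boxer=True)
assert predict_by_simulation(me) == predict_by_proof(me)  # same answer, different methods
```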
And in that simulation they aren’t being perfectly honest, but I still believe they are.
… So we can’t conclude this.
If I can’t apply reason when using CDT, CDT will fail when I’m presented with an “opportunity” to buy a magic rock that costs £10,000 and will make me win the lottery within a month.
This suggests you don’t really understand the problem (or perhaps CDT). That is not the same kind of reasoning.
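A hedged sketch of why the two cases differ under CDT (all numbers below are invented for illustration): CDT scores each action by its causal consequences, and buying the rock has no causal path to the lottery outcome, so CDT correctly refuses it.

```python
# CDT's causal expected-value comparison for the magic-rock offer.
# JACKPOT and P_WIN are made-up numbers for illustration only.

COST = 10_000        # price of the rock (pounds)
JACKPOT = 5_000_000  # assumed lottery jackpot
P_WIN = 1e-8         # assumed win probability, causally unchanged by the rock

# Under CDT, buying the rock has no causal effect on the lottery,
# so the win probability is identical under both actions.
ev_buy = P_WIN * JACKPOT - COST   # ≈ -9,999.95
ev_pass = P_WIN * JACKPOT         # ≈ +0.05

print(f"EV(buy rock) = {ev_buy:,.2f}, EV(pass) = {ev_pass:,.2f}")
# CDT declines the rock. Newcomb's problem is structurally different: the
# predictor's accuracy makes the box contents depend on your decision
# procedure, which is exactly the dependence CDT's causal reasoning severs.
```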
No, he doesn’t (necessarily). He could prove the inevitable outcome based on elements of the known state of your brain without ever simulating anything. If you read “reduction of could” you will find a somewhat similar distinction that may make things clearer.
Does he not know the answer to “what will happen after this” with regard to every point in the scenario?
If he doesn’t, is he all-knowing?
If he does know the answer at every point, in what way doesn’t he contain the entire scenario?
EDIT: A non-all-knowing superintelligence could presumably find ways other than simulation to get my answer; as I said, simulation just strikes me as the most probable. If you think I should update my probability estimate of the other methods, that’s a perfectly reasonable objection to my logic re: a non-all-knowing superintelligence.
Certainly. That is what I consider Omega to be doing when I think about these problems. It is a useful intuition pump, something we can get our heads around.
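For what it’s worth, a toy example (mine, not from the thread) bearing on the earlier question of whether knowing the answer at every point requires containing the entire scenario: a short proof can answer “what is the state at step t?” for every t without any step-by-step run ever occurring.

```python
# Knowing the answer at every point vs. containing the unfolding process.
# The doubling system below is an invented stand-in for "the scenario".

def state_by_simulation(t: int) -> int:
    """Contain/run the process: step the system t times."""
    x = 1
    for _ in range(t):
        x = 2 * x  # the system doubles at each step
    return x

def state_by_proof(t: int) -> int:
    """Answer without running anything: a one-line proof gives x(t) = 2**t."""
    return 2 ** t

# The proof-style oracle agrees at every point, yet no simulation occurred.
assert all(state_by_simulation(t) == state_by_proof(t) for t in range(32))
```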