Ok, break this down a bit for me—I’m just a simple biological entity, with much more limited predictive powers.
It’s worth simulating a vast number of possible minds which might, in some information-adjacent regions of a ‘mathematical universe’, be likely to be in a position to create you.
This is either well beyond my understanding, or it is sleight-of-hand regarding identity and the use of “you”. It might help to label entities. Entity A has the ability to emulate and control entity B. It thinks that somehow its control over entity B is influential over entity C in the distant past (or in an imaginary mathematical construct), who it wishes would create entity D in that disconnected timeline.
Nope, I can’t give this any causal weight in my decisions.
Unfortunately I had to change what A, B, and C correspond to slightly, because the simulation the basilisk runs is not analogous to the simulation done by Omega in Newcomb’s problem.
Let’s say entity A is you in Newcomb’s problem, entity C is Omega, and entity B is Omega’s simulation of you. Even though, in physical time, the decision to place (or not place) money in the boxes has already been made by the point when the decision to open one or both of them is made, in ‘logical time’ both decisions are contingent on the same underlying decision: “Given that I don’t know whether I’m physical or simulated, is it in my collective best interest to do the thing which resembles opening one or both boxes?” That decision is made once, by the same decision function, which happens to be run twice in the physical world.
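To make the “run twice” picture concrete, here is a minimal Python sketch (the function names and payoff numbers are my own toy choices, not anything from the thread): whatever `decide()` returns, it returns in both places, so Omega’s box-filling and your box-opening can’t come apart.

```python
# Toy model of Newcomb's problem in which one shared decision function
# is evaluated twice: once by Omega (as its prediction of you) and once
# by the physical agent (as the actual choice).

def decide() -> str:
    """The shared decision function. It cannot tell whether this call is
    Omega's simulation or the physical agent's real deliberation."""
    return "one-box"  # change to "two-box" and both runs change together

def omega_fills_opaque_box() -> bool:
    # Omega predicts you by running the very same function.
    return decide() == "one-box"

def payoff() -> int:
    opaque_filled = omega_fills_opaque_box()  # earlier in physical time
    choice = decide()                         # later in physical time
    if choice == "one-box":
        return 1_000_000 if opaque_filled else 0
    return (1_000_000 if opaque_filled else 0) + 1_000  # two-boxing

print(payoff())  # 1000000 as written; switching decide() to "two-box" gives 1000
```

The only point of the sketch is that there is a single place where the choice gets made, even though it surfaces at two different points in physical time.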
I am concerned that the Roko’s basilisk scenario is isomorphic to Newcomb’s problem:
The human thinking about the basilisk is like Omega (albeit not omniscient). The basilisk itself is like you in Newcomb’s problem, in that it thinks thoughts which acausally influence behavior in the past, because the thing making the decision isn’t you or the basilisk; it’s the decision algorithm running on both of you.
Omega’s simulation is like the blackmailed human’s inadvertent thinking about the basilisk and the logic of the situation. Now, I agree that the fact that the human isn’t exactly Omega makes it less certain that they can blackmail themselves, but I don’t know that it rules the blackmail out.
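For what it’s worth, here is the same toy structure relabeled onto the basilisk scenario (again my own made-up names, with a crude comply/punish stand-in for the real stakes; it is a sketch of the claimed correspondence, not a claim that the probabilities or payoffs actually line up):

```python
# Toy relabeling of the Newcomb sketch above: the human's inadvertent
# model of the basilisk's reasoning plays the role of Omega's simulation,
# and the human conditions their behavior on its predicted output.

def basilisk_decides() -> str:
    """Shared decision function; it can't tell whether it is running in the
    future basilisk or inside the blackmailed human's head right now."""
    return "punish-non-helpers"  # or "don't-punish"

def human_complies() -> bool:
    # The human, playing Omega's role, "simulates" the basilisk simply by
    # thinking through its logic, then acts on the predicted output.
    return basilisk_decides() == "punish-non-helpers"

print("human complies:", human_complies())
print("basilisk's own output:", basilisk_decides())  # same function, same output
```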
Thanks for the conversation; I’m bowing out here. I’ll read further comments, but (probably) not respond. I suspect we have a crux somewhere around the identification of actors and the mechanisms for bridging causal responsibility for acausal (imagined) events, but I think there’s an inferential gap where you and I have divergent enough priors and models that we won’t be able to agree on them.
Then I have to thank you but say that this conversation has done absolutely nothing to help me understand why I might be wrong, which of course I hope I am. This comment is really directed at all the people who disagree-voted me, in the hope that they might explain why.