It’s worth putting a number on that, and a different number for each of the two main actions you’re considering actually performing (or possibly the same number; I personally think my chances of being resurrected and tortured vary by epsilon based on my own actions in life: if the gods will it, it will happen, and if they don’t, it won’t).
For me, that number is inestimably tiny. I suspect anyone who thinks it’s significant of fairly high neuroticism and of an irrational failure to limit the sum of their probabilities to 1.
“I suspect anyone who thinks it’s significant of fairly high neuroticism and of an irrational failure to limit the sum of their probabilities to 1.” Why? What justifies your infinitesimal value?
I find it very difficult to estimate probabilities like this, but I expect the difference between the probability of something significant happening if I do something in response to the basilisk and the probability of that happening if I don’t is almost certainly in excess of 1/1000, or even 1/100. This is within the range where I think it makes sense to take it seriously. (And this is why I asked this question.)
I have a very hard time even justifying 1/1000. 1/10B is closer to my best guess (plus or minus 2 orders of magnitude). It requires a series of very unlikely events:
1) enough of my brain-state is recorded that I COULD be resurrected
2) the imagined god finds it worthwhile to simulate me
3) the imagined god is angry at my specific actions (or lack thereof) enough to torture me rather than any other value it could get from the simulation.
4) the imagined god has a decision process that includes anger or some other non-goal-directed motivation for torturing someone who can no longer have any effect on the universe.
5) no other gods have better things to do with the resources, and stop the angry one from wasting time.
Note: even if you relax 1 and 2, so that the putative deity punishes RANDOM simulated people in order to punish YOU specifically (because you’re actually dead and gone), it still doesn’t make it likely at all.
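To make the arithmetic behind an estimate like 1/10B concrete, here is a minimal sketch, assuming the five conditions above are roughly independent and assigning each a purely illustrative probability of 1 in 100. The specific numbers are placeholders chosen for the example, not estimates from either commenter.

```python
# Illustrative only: each factor is a made-up placeholder for one of the
# five conditions listed above, not an estimate either commenter has made.
p_brain_state_recorded    = 1e-2  # (1) enough brain-state survives for resurrection
p_worth_simulating        = 1e-2  # (2) the imagined god bothers to simulate me
p_punishes_my_actions     = 1e-2  # (3) it singles out my actions for torture
p_motivated_to_torture    = 1e-2  # (4) its decision process favors pointless torture
p_no_other_god_intervenes = 1e-2  # (5) nothing else stops it or claims the resources

joint = (p_brain_state_recorded
         * p_worth_simulating
         * p_punishes_my_actions
         * p_motivated_to_torture
         * p_no_other_god_intervenes)

print(f"joint probability = {joint:.0e}")  # 1e-10, i.e. roughly 1 in 10 billion
```

The point of the sketch is only that a conjunction of several independently unlikely conditions shrinks multiplicatively; the disagreement in the thread is over how unlikely each condition really is.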
You’re imagining a very different scenario from me. I worry that:
It’s worth simulating a vast number of possible minds which might, in some information-adjacent regions of a ‘mathematical universe’, be likely to be in a position to create you, from a purely amoral point of view. This means you don’t need to simulate them exactly, only to the level of fidelity at which they can’t tell whether they’re being simulated (and in any case, I don’t have the same level of certainty that it couldn’t gather enough information about me to simulate me exactly). Maybe I’m an imperfect simulation of another person. I wouldn’t know, because I’m not that person.
“the imagined god is angry at my specific actions (or lack thereof) enough to torture me rather than any other value it could get from the simulation.” I don’t think it needs to be angry, or a god. It just needs to understand the (I fear sound) logic involved, which Eliezer Yudkowsky took semi-seriously.
“4) the imagined god has a decision process that includes anger or some other non-goal-directed motivation for torturing someone who can no longer have any effect on the universe.”
It wouldn’t need to be non-goal-directed.
“no other gods have better things to do with the resources, and stop the angry one from wasting time.” What if there are no ‘other gods’? This seems likely in the small region of the ‘logical/platonic universe’ containing this physical one.
Ok, break this down a bit for me—I’m just a simple biological entity, with much more limited predictive powers.
“It’s worth simulating a vast number of possible minds which might, in some information-adjacent regions of a ‘mathematical universe’, be likely to be in a position to create you”
This is either well beyond my understanding, or it is sleight-of-hand regarding identity and the use of “you”. It might help to label entities. Entity A has the ability to emulate and control entity B. It thinks that somehow its control over entity B is influential over entity C in the distant past, or in an imaginary mathematical construct, who it wishes would create entity D in that disconnected timeline.
Nope, I can’t give this any causal weight to my decisions.
Unfortunately I had to change what A, B and C correspond to slightly, because the simulation the basilisk does is not analogous to the simulation done by Omega in Newcomb’s problem.
Let’s say entity A is you in Newcomb’s problem, while entity C is Omega and entity B is Omega’s simulation of you. Even though the decision to place, or not place, money in the boxes has already been made in physical time by the point when the decision to open one or both of them is made, in ‘logical time’ both decisions are contingent on the same choice: “Given that I don’t know whether I’m physical or simulated, is it in my collective best interest to do the thing which resembles opening one or both boxes?” That choice is made at once by the same decision function, which happens to be run twice in the physical world.
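Here is a minimal sketch of the “same decision function run twice” picture, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one). The function names and structure are illustrative assumptions, not anything specified in this thread.

```python
# Sketch: one decision function, evaluated twice. Omega runs it (via its
# simulation of you) to decide whether to fill the opaque box; the physical
# you runs it again when choosing. Payoffs are the standard Newcomb amounts;
# everything else here is an illustrative assumption.

def decision_function() -> str:
    """The single policy shared by the simulated and the physical agent."""
    return "one-box"  # change to "two-box" to see the other outcome


def omega_fills_opaque_box() -> bool:
    # Omega's prediction: run (a copy of) the same decision function.
    return decision_function() == "one-box"


def payoff() -> int:
    opaque = 1_000_000 if omega_fills_opaque_box() else 0
    if decision_function() == "one-box":  # the physical agent's run
        return opaque
    return opaque + 1_000                 # two-boxing also takes the $1,000


print(payoff())  # 1,000,000 with this policy; 1,000 if the policy two-boxes
```

The relevant feature is that there is no point in the code where “you” and “the simulation of you” decide separately: both outcomes are fixed by whatever `decision_function` returns.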
I am concerned that the Roko’s basilisk scenario is isomorphic to Newcomb’s problem:
The human thinking about the basilisk is like Omega (albeit not omniscient). The basilisk itself is like you in Newcomb’s problem, in that it thinks thoughts which acausally influence behavior in the past, because the thing making the decision isn’t either you or the basilisk; it’s the decision algorithm running on both of you.
Omega’s simulation is like the blackmailed human’s inadvertent thinking about the basilisk and the logic of the situation. Now, I agree that the fact that the human isn’t exactly Omega makes it less certain that they can blackmail themselves, but I don’t know that this rules it out.
Thanks for the conversation; I’m bowing out here. I’ll read further comments, but (probably) not respond. I suspect we have a crux somewhere around the identification of actors and the mechanisms of bridging causal responsibility for acausal (imagined) events, but I think there’s an inferential gap where you and I have divergent enough priors and models that we won’t be able to agree on them.
Then I have to thank you but say that this conversation has done absolutely nothing to help me understand why I might be wrong, which of course I hope I am. This comment is really directed at all the people who disagree-voted me, in the hope that they might explain why.