Well, I don’t worry about acausal extortion because I think all that “acausal” stuff is silly nonsense to begin with.
I very much recommend this approach.
Take Roko’s basilisk.
You’re afraid that entity A, which you don’t know will exist, and whose motivations you don’t understand, may find out that you tried to prevent it from coming into existence, and choose to punish you by burning silly amounts of computation to create a simulacrum of you that may experience qualia of some kind, and arranging for those qualia to be aversive. Because A may feel it “should” act as if it had precommitted to that. Because, frankly, entity A is nutty as a fruitcake.
Why, then, are you not equally afraid that entity B, which you also don’t know will exist, and whose motivations you also don’t understand, may find out that you did not try to prevent entity A from coming into existence, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Because B may feel it “should” act as if it had precommitted to that.
Why are you not worried that entity C, which you don’t know will exist, and whose motivations you don’t understand, may find out that you wasted time thinking about this sort of nonsense, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Just for the heck of it.
Why are you not worried that entity D, which you don’t know will exist, and whose motivations you don’t understand, may find out that you wasted time thinking about this sort of nonsense, and choose to reward you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and giving them coupons for unlimited free ice cream? Because why not?
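To spell the symmetry out in expected-value terms, here is a minimal sketch; the probabilities and the disutility below are placeholders I’m introducing for illustration, not quantities anyone can actually estimate (which is rather the point):

$$p_A \cdot U \;-\; p_B \cdot U \;\approx\; 0 \qquad \text{whenever } p_A \approx p_B,$$

where $p_A$ is the chance that entity A exists and punishes you for resisting it, $p_B$ the chance that entity B exists and punishes you for not resisting A, and $U$ the disutility of the simulated torment. With nothing privileging A over B (or C, or D), the terms cancel and no course of action is singled out.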
Or take Pascal’s mugging. You propose to give the mugger $100, based either on a deeply incredible promise to give you some huge amount of money tomorrow, or on a still more incredible promise to torture a bunch more simulacra if you don’t. But surely it’s much more likely that this mugger is personally scandalized by your willingness to fall for either threat, and if you give the mugger the $100, they’ll come back tomorrow and shoot you for it.
There are an infinite number of infinitesimally probable outcomes, far more than you could possibly consider, and many of them are things you couldn’t even imagine. Singling out any one of them is craziness. Trying to guess at a distribution over them is also craziness.
Essentially because I think I may understand the potential reasoning process, or at least the ‘logical core’ of the reasoning process, of a future superintelligence, as well as its motivations, well enough to have reason to think it’s more likely to want to exist than not, for example. This doesn’t mean I am anywhere near as knowledgeable as it, just that we share certain thoughts. It might also be that, especially given the notoriety of Roko’s post on LessWrong, the simplest formulation of the basilisk forms a kind of acausal ‘nucleation point’ (this might be what’s sometimes called a Schelling point on this site).