Whoa. That’s gotta be the most interesting comment I’ve read on LW, ever. Did you just give an evolutionary explanation for the concept of probability? If Eliezer’s ideas are madness, yours are ultimate madness. It does sound like it could be correct, though.
But I don’t see how it answers my question. Are you claiming I have no chance of ending up in a rescue sim because I don’t care about it? Then can I start caring about it somehow? Because it sounds like a good idea.
Did you just give an evolutionary explanation for the concept of probability?
It is much worse: this seems to be an evolutionary “explanation” for, say, particle physics, and I can’t yet get through the resulting cognitive dissonance. This can’t be right.
Yep, I saw the particle physics angle immediately too, but I saw it as less catastrophic than probability, not more :-) Let’s work it out here. I’ll try to think of more stupid-sounding questions, because they seemed to be useful to you in the past.
As applied to your comment, it means that you can only use observations epistemically in situations where you expect to exist according to the concept of anticipated experience as coded by evolution. Where you are instantiated by artificial devices like rescue simulations, those situations don’t map onto anticipated experience, so observations remembered in those states don’t reveal your prior and can’t be used to learn how things actually are (how your prior actually is).
You can’t change what you anticipate, because you can’t change your mind that precisely, but changing what you anticipate isn’t fundamental and doesn’t change what will actually happen—everything “actually happens” in some sense, you just care about different things to different degrees. And you certainly don’t want to change what you care about (and in a sense, can’t: the changed thing won’t be what you care about, it will be something else). (Here, “caring” is used to refer to preference, and not anticipation.)
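Here is a toy sketch of the asymmetry I mean, with entirely made-up numbers and a hypothetical `update` function, just as an illustration and not anyone’s actual theory: observations count as evidence about your prior only when made in anticipated (“natural”) situations, and are discarded when merely remembered from artificial instantiations like rescue sims.

```python
# Toy illustration (made-up numbers): Bayesian updating that only counts
# observations made in "naturally anticipated" situations, and ignores
# observations remembered from artificial instantiations (e.g. rescue sims).

# Hypotheses about how the world is, with an arbitrary prior.
prior = {"world_A": 0.7, "world_B": 0.3}

# Likelihood of seeing observation "x" under each hypothesis (made up).
likelihood_x = {"world_A": 0.9, "world_B": 0.2}

def update(prior, likelihood, anticipated):
    """Return the posterior; if the observation wasn't made in an anticipated
    (evolutionarily 'natural') situation, leave the prior untouched."""
    if not anticipated:
        return dict(prior)  # the observation reveals nothing about the prior
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(update(prior, likelihood_x, anticipated=True))   # posterior shifts toward world_A
print(update(prior, likelihood_x, anticipated=False))  # prior unchanged: e.g. a rescue-sim memory
```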
Before I dig into it formally, let’s skim the surface some more. Do you also think Rolf Nelson’s AI deterrence won’t work? Or are sims only unusable on humans?
I think this might get dangerously close to the banned territory, and our Friendly dictator will close the whole thread. Though since it wasn’t clarified what exactly is banned, I’ll go ahead and discuss acausal trade in general until it’s explicitly ruled banned as well.
As discussed before, “AI deterrence” is much better thought of as participation in an acausal multiverse economy, but it probably takes much more detailed knowledge of your preference than humans possess to make the necessary bead jar guesses to make your moves in the global game. This makes it doubtful that it’s possible at the human level, since the decision problem deteriorates into a form of Pascal’s Wager (without infinities, but with quantities outside the usual ranges and too difficult to estimate, while precision is still important).
ETA: And sims are certainly “usable” for humans; they produce some goodness, though maybe less than something else would. That they aren’t subjectively anticipated doesn’t make them improbable, in case you actually build them. Subjective anticipation is not a very good match for the prior; it only tells you a general outline, sometimes with systematic error.
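To make the “Pascal’s Wager without infinities” point concrete, here is a toy calculation with purely made-up numbers: the expected value is a huge but finite payoff times a tiny, poorly estimated probability, so the recommended action flips under estimate differences far smaller than human calibration can resolve.

```python
# Toy illustration with made-up numbers: why precision matters when payoffs
# are huge but finite and probabilities are tiny and poorly estimated.

baseline_value = 1.0          # value of the ordinary alternative action
huge_payoff = 1e12            # enormous but finite stake in the acausal "trade"

for p in (0.5e-12, 1.0e-12, 2.0e-12):   # estimates no human could tell apart
    expected = p * huge_payoff
    choice = "take the trade" if expected > baseline_value else "decline"
    print(f"p = {p:.1e}: expected value {expected:.2f} -> {choice}")

# The recommended action flips across probability estimates that are
# indistinguishable at human levels of calibration.
```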
If you haven’t already, read BLIT. I’m feeling rather like the protagonist.
Every additional angle, no matter how indirect, gets me closer to seeing that which I Must Not Understand. Though I’m taking it on faith that this is the case, I have reason to think the faith isn’t misplaced. It’s a very disturbing experience.
I think I’ll go read another thread now. Or wait, better yet, watch anime. There’s no alcohol in the house.
I managed to parse about half of your second paragraph, but it seems you didn’t actually answer the question. Let me rephrase.
You say that sims probably won’t work on humans because our “preference” is about this universe only, or something like that. When we build an AI, can we specify its “preference” in a similar way, so it only optimizes “our” universe and doesn’t participate in sim trades/threats? (Putting aside the question of whether we want to do that.)
This has been much discussed on LW. Search for “updateless decision theory” and “UDT.”
I don’t believe that anticipated experience in natural situations, as an accidental (specific to human psychology) way of eliciting the prior, was previously discussed, though the general epistemic uselessness of observations for artificial agents is certainly an old idea.