Anticipation should be a tool used in the service of your decision theory. Once you bring in a particular decision theory and utility function, the question is dissolved. (If you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom’s notion of “anticipate.” So if I had to go with an answer, that would be mine.)
Doing that isn’t as straightforward as it perhaps looks; I still have no idea how to approach the problem of anticipation. (Also, “total quality of simulated observer moments”?)
Do you mean try to reverse engineer a notion of anticipation, or try to dissolve the question?
For the first, I mean to define anticipation in terms of what wagers you would make. In this case, how you treat a wager depends on whether having a simulation win the wager causes something your utility function counts as good to happen in one simulated copy, or in a million of them. Is that fair enough? I don’t see why we care about anticipation at all, except as it bears on our decision making.
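To make that dependence concrete, here is a toy valuation of a wager under a utility function that counts each simulated copy separately. The function name, the probabilities, and the payoffs below are all made up for illustration; this is a sketch of the dependence on copy count, not anything fixed by the discussion above.

```python
def value_of_accepting(p_win, payoff_if_win, cost_if_lose, copies_paid=1):
    """Expected utility of taking a wager when a win pays off in
    `copies_paid` simulated copies at once (and a loss costs you once)."""
    return p_win * copies_paid * payoff_if_win - (1 - p_win) * cost_if_lose

# The same long-shot wager is refused or accepted depending only on how many
# copies the winning payoff lands in:
print(value_of_accepting(0.001, 1.0, 1.0, copies_paid=1))      # about -0.998: refuse
print(value_of_accepting(0.001, 1.0, 1.0, copies_paid=10**6))  # about 999: accept
```

On this reading, “what should I anticipate?” never needs a separate answer: the wager policy falls straight out of the valuation.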
I don’t really understand how the second question is difficult. Whatever strategy you choose, you can predict exactly what will happen. So as long as you can compare the outcomes, you know what you should do. If you care about the number of simulated paperclips that are ever created, then you should take an even paperclip bet on whether you won the lottery if the paperclips would be created before the extra simulations are destroyed. Otherwise, you shouldn’t.
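A toy version of that calculation, with made-up numbers for the lottery odds, the number of extra simulations, and the stake. It is a sketch of the argument under the stated assumptions (in particular, that losing the even bet costs one clip), not a claim about the actual values.

```python
def expected_clips_from_betting(p_win=1e-6, extra_copies=10**9,
                                clips_before_destruction=True):
    """Expected number of simulated paperclips ever created if you take an
    even one-clip bet that you won the lottery, under a utility function
    that counts every clip. If you won, every extra simulated copy of you
    makes (and wins) the same bet; whether that yields one clip per copy or
    one clip total depends on whether the clips are created before the
    extra simulations are shut down. Losing is assumed to cost one clip."""
    clips_if_won = extra_copies if clips_before_destruction else 1
    return p_win * clips_if_won - (1 - p_win) * 1

print(expected_clips_from_betting(clips_before_destruction=True))   # positive: take the bet
print(expected_clips_from_betting(clips_before_destruction=False))  # negative: don't
```

The sign flips on the timing alone: once the utility function is fixed, the bet is settled by arithmetic, with no separate question about anticipation.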
(Also, “total quality of simulated observer moments”?)
How do you describe a utility function that cares twice as much about what happens to a consciousness that is being simulated twice?
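One minimal way to write that down. The representation of an observer moment here is a placeholder; substitute whatever measure of quality, and whatever count of simulations running the moment, you actually endorse.

```python
def total_quality(observer_moments):
    """Utility as the total quality of simulated observer moments: a moment
    that is being simulated twice simply counts twice.
    Each entry is a (copies, quality) pair."""
    return sum(copies * quality for copies, quality in observer_moments)

# The same experience, simulated twice, contributes double:
print(total_quality([(2, 3.0)]))  # 6.0
print(total_quality([(1, 3.0)]))  # 3.0
```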