I think I finally have a clear understanding of Newcomb-like problems. I'm afraid I may just be rediscovering some EY post that I read and forgot, but writing it out will be faster than searching.
I think the reason people get stuck on these problems is the wrong intuition that your decisions "change the future". This is obviously wrong if you think about it: it's not as if there was one Future back in the Past, and now, in the Present, there is a different Future.
It's wrong in the same way as thinking that a computer program, while it runs, "changes its future outcome". Each step causes the following steps, but those steps were themselves caused by previous steps of the execution. Likewise, your decisions cause and determine the future, but they were themselves caused by the past, and there has always been just one stream of causes.
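A minimal sketch of that analogy in Python (the toy update rule is my own invented example): the program's outcome is fixed by its initial state and rules, so nothing mid-run "changes" where it was always going to end up.

```python
def run(state: int, steps: int) -> int:
    """Deterministic toy program: each step is fully caused by the previous state."""
    for _ in range(steps):
        state = (state * 3 + 1) % 1000  # the next step depends only on the current state
    return state

# The outcome isn't "changed" mid-run; it was fixed by the initial state and the rule.
assert run(42, 10) == run(42, 10)
```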
In the same way, if you decide "one box" in the future and from that decision expect that Omega put in the 1M, it doesn't mean your decision "changed the past", any more than it changed the future. There is just a previous state of the world (including you) which causes both your real choice and the choice in Omega's prediction.
So, probably that makes me more EDT than TDT?
From my point of view it's not about being a "kind of agent", that's just a strange crutch. It's more like thinking that there was some full system state, some code, which fully caused your current state, and that this code is also executed somewhere else, and because the system is fully deterministic, the results of the two executions will always match.
And then if you know that you've done something, you can immediately predict from that evidence that the other copy has done the same: if one copy of a deterministic algorithm calculated some expression, you can generally predict that calculating that expression always returns the same answer X, so everyone who calculates it gets the same X. And if you know you calculated it and got Y, you know that X = Y.
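A hypothetical sketch of that inference (the decision procedure and its seed are made-up stand-ins): two copies of the same deterministic calculation must agree, so learning your own answer tells you the copy's answer.

```python
def shared_decision_procedure(seed: int) -> str:
    """The 'expression' both copies evaluate; deterministic, so same input -> same output."""
    return "one box" if (seed * 7919) % 2 == 0 else "two boxes"

my_answer = shared_decision_procedure(seed=12)
# Having seen my own output Y, I can predict the other copy's output X without observing it: X = Y.
predicted_copy_answer = my_answer
assert predicted_copy_answer == shared_decision_procedure(seed=12)
```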
So in the expectation of a future where you pick one box, you can expect that your future self will immediately update toward the Omega simulation in the past also having picked one box, which caused Omega to put in the 1M. And vice versa: in the expectation of a future where you pick two boxes, you can update toward the simulation having picked two boxes and the 1M being absent.
No time travel. There are indeed causal links between your decision and the earlier box contents, but they run from the past: your decision not only causes things, it WAS itself caused, and the same causes produce both your decision and the box contents.
I think it would be more obvious if the problem started with Omega visibly beginning to calculate its prediction, secretly making the setup, and only THEN giving you a choice, rather than "everything already happened, choose".
There is a problem with this kind of thinking, though: when you tell people that they are fully determined, they start to think their future is determined not through their ongoing thinking process, but independently of it.
I don't know what to say to that except: people who think they can determine the future evidently get better results than those who think everything is doomed, so you definitely can determine the future by trying to do that, even if you don't know how to reconcile that with being determined yourself.
Or maybe: if you know you are determined and start thinking "how is my making this future decision in the usual way determined by the past", that's a mistake; it's more levels of calculations nested inside calculations, so you have less power left to spend on making a good decision. And of course don't think "how is my current future determined by my past, including this very thought", that's an infinite loop.
Though in the first place I think about it like this: if you have examples of previous games and the one-boxers got better results, then of the two future predictions, the one with higher expected utility is the one where you decide to pick one box; so the decision to pick one box has higher expected utility, so do it.
Or, to elaborate: in our examples, choosing one box is evidence of getting the 1M. But if you choose one box, you will not GET this evidence, you will CREATE it. And usually, if you have some evidential measure and start to optimise it, it stops being evidence.
But if we have examples of people who optimised the measure and still got the results, then it will still be valid evidence even when you optimise it. So the prediction that starts with picking one box still has better expected utility, and it's the one you should choose to bring about by picking one box.
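A rough sketch of that conditional expected-utility comparison, with the standard Newcomb payoffs and a 0.99 predictor accuracy as example numbers of my own (not from the comment):

```python
# Conditional expected utility from the observed track record of previous games.
ACCURACY = 0.99          # assumed fraction of past games where the prediction matched the choice
BIG, SMALL = 1_000_000, 1_000

# E[payoff | one-box]: the predictor most likely put the 1M in the opaque box.
eu_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
# E[payoff | two-box]: the predictor most likely left the opaque box empty.
eu_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(eu_one_box, eu_two_box)  # roughly 990,000 vs 11,000 -> one-boxing has higher expected utility
```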
And I prefer to think about the whole situation as if our world were a simulation: Omega copies the computer's state at some point, lets it run until it can see your decision, and then executes the copied state a second time, but now fills the boxes conditional on your choice from the previous run. And in sim 3 it uses your choice from sim 2. And so on, a billion times.
For me it's then obvious that you should choose one box, because running the same simulation state cannot give different results at the moment where the simulated you makes the choice.
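A toy version of that picture, with everything hypothetical (the "agent" is just a fixed deterministic function standing in for the copied state): the boxes in run N+1 are conditioned on the choice observed in run N, and a deterministic agent makes the same choice every time.

```python
def agent() -> str:
    """Deterministic simulated 'you': the same copied state always yields the same choice."""
    return "one box"  # stand-in for whatever fixed policy the copied state encodes

def iterated_simulations(runs: int = 5) -> list:
    previous_choice = agent()  # run 1: Omega just watches the copied state decide
    payoffs = []
    for _ in range(runs):
        big_box = 1_000_000 if previous_choice == "one box" else 0  # conditioned on the previous run
        choice = agent()  # the same deterministic state can't choose differently this time
        payoffs.append(big_box if choice == "one box" else big_box + 1_000)
        previous_choice = choice
    return payoffs

print(iterated_simulations())  # [1000000, 1000000, 1000000, 1000000, 1000000]
```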
(And when I was trying to think about a choice that changes the past, and about determinism, free will and decisions, my brain felt like it was trying to fold itself into a pretzel. My conclusion is that it's better never to actually act on logic that turns your brain into a pretzel, even if it is "perfectly physically grounded". If you have some "perfectly physically grounded logic", you need to unfold it until it becomes obvious, intuitive and fully visible, because otherwise you'll just end up making mistakes.)
A lot of free will confusions are sidestepped by framing decisions so that the agent thinks of itself as “I am an algorithm” rather than “I am a physical object”. This works well for bounded individual decisions (rather than for long stretches of activity in the world), and the things that happen in the physical world can then be thought of as instantiations of the algorithm and its resulting decision, which the algorithm controls from its abstract headquarters that are outside of physical worlds and physical time.
For example, this way you don’t control the past or the future, because the abstract algorithm is not located at some specific time, and all instances of it at various times within the physical world are related to the abstract algorithm in a similar way. For coordination of multiple possible worlds, an abstract algorithm is not anchored to a specific world, and so there is no additional conceptual strangeness of controlling one possible world from another, because in this framing you instead control both from the same algorithm that is not intrinsically part of either of them. There are also thought experiments where existence of an instance of the decision maker in some world depends on their own decision (so that for some possible decisions, the instance never existed in the first place), and extracting the decision making into an algorithm that’s unbothered by nonexistence of its instances in real worlds makes this more straightforward.
Somewhat similarly, one of the most useful shifts in thinking about identity, to make it apply usefully to more cases, is to take "My identity is an algorithm/function" as the more fundamental, general case, and to treat "My identity is a physical object" as a useful special case, since the physicalist view of identity cannot hold in certain regimes.
The shift from a physical view to an algorithmic view of identity answers/dissolves/sidesteps a lot of confusing questions about what happens to identity.
(It's also possible that identity in a sense is basically a fiction, but that's another question entirely.)
I am suspicious of and don't like using these weird sidesteps, rather than just not being confused while looking at the question from the standpoint of "how it will actually look in the world/situation" (though they can be faster, yeah).
I mean, causes are real, the future was caused by you, maybe even say controlled by you; and it feels less like controlling something if somebody predicts my wishes and fulfils them before I can even think about fulfilling them.
But these are probably just trade-offs of trying to explain these things to people in plain English.
When I first thought that picking actions by conditional expected utility was obviously correct, I was very confused about the whole decision-theories situation. So the link was very useful, thanks.