I'm suspicious of, and don't like, weird sidesteps; I'd rather avoid confusion by looking at the question from the angle of "how will this actually play out in the world/situation?" (though the sidesteps can be faster, yeah).
I mean, causes are real; the future was caused by you, maybe even "controlled" by you. And it feels less like controlling something if somebody predicts your wishes and fulfills them before you can even think about fulfilling them yourself.
But these are probably just the trade-offs of trying to explain these things to people in plain English.
Back when I thought that picking actions by conditional expected utility was obviously correct, I was very confused about the whole decision-theory situation, so the link was very useful, thanks.
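For concreteness, here is a minimal sketch of what "picking actions by conditional expected utility" (the EDT rule) means on a toy Newcomb-style problem. The payoff amounts and predictor accuracy are made-up illustration values, not anything from the discussion above:

```python
# Sketch of the EDT rule: pick the action with the highest expected utility
# conditional on taking that action. Toy Newcomb setup with assumed numbers.

ACTIONS = ["one-box", "two-box"]
PREDICTOR_ACCURACY = 0.99  # assumed P(predictor guessed your action correctly)

def utility(action, predicted):
    # Predictor fills the opaque box with $1M only if it predicted one-boxing;
    # the transparent box always holds $1K and is taken only by two-boxers.
    million = 1_000_000 if predicted == "one-box" else 0
    thousand = 1_000 if action == "two-box" else 0
    return million + thousand

def conditional_expected_utility(action):
    # The EDT move: conditioning on your action shifts the probability
    # of what the predictor foresaw, i.e. we use P(prediction | action).
    return sum(
        (PREDICTOR_ACCURACY if predicted == action else 1 - PREDICTOR_ACCURACY)
        * utility(action, predicted)
        for predicted in ACTIONS
    )

for a in ACTIONS:
    print(f"{a}: {conditional_expected_utility(a):,.0f}")
# one-box: 990,000 vs two-box: 11,000, so this rule one-boxes
print("EDT picks:", max(ACTIONS, key=conditional_expected_utility))
```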