A lot of free will confusions are sidestepped by framing decisions so that the agent thinks of itself as “I am an algorithm” rather than “I am a physical object”. This works well for bounded individual decisions (rather than for long stretches of activity in the world), and the things that happen in the physical world can then be thought of as instantiations of the algorithm and its resulting decision, which the algorithm controls from its abstract headquarters that are outside of physical worlds and physical time.
For example, this way you don’t control the past or the future, because the abstract algorithm is not located at any specific time, and its instances at various times within the physical world are all related to it in the same way. When coordinating across multiple possible worlds, an abstract algorithm is not anchored to any specific world, so there is no additional conceptual strangeness in controlling one possible world from another: in this framing you instead control both from the same algorithm, which is not intrinsically part of either of them. There are also thought experiments where the existence of an instance of the decision maker in some world depends on their own decision (so that for some possible decisions, the instance never existed in the first place), and extracting the decision making into an algorithm that is unbothered by the nonexistence of its instances in real worlds makes this more straightforward.
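To make the “instantiations of the algorithm” picture concrete, here is a minimal sketch in Python (my own illustration, not from the original comment; the Newcomb-style predictor, the payoffs, and all the names are assumptions) of how a single abstract decision function can be seen as controlling what happens at more than one physical site:

```python
# Minimal sketch: one abstract decision algorithm, two "instantiations" of it.
# Assumed setup: a Newcomb-style predictor that runs the same function the agent runs.

def decide():
    """The abstract decision algorithm; both instantiation sites below just run it."""
    return "one-box"  # try changing this to "two-box"

def predictor_fills_box():
    # Instantiation 1: the predictor runs the algorithm to decide whether
    # to put $1,000,000 in the opaque box.
    return 1_000_000 if decide() == "one-box" else 0

def agent_payoff():
    # Instantiation 2: the agent runs the same algorithm when physically acting.
    opaque = predictor_fills_box()
    action = decide()
    return opaque if action == "one-box" else opaque + 1_000

print(agent_payoff())  # 1000000 as written; 1000 if decide() returns "two-box"
```

Editing the single abstract function changes what happens at both instantiation sites at once, which is the sense in which the algorithm “controls” its instances without being located at any particular place or time.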
Somewhat similarly, one of the most useful shifts for applying the notion of identity to more cases is to treat “My identity is an algorithm/function” as the fundamental, general case, and “My identity is a physical object” as a useful special case, one that stops holding in certain regimes.
The shift from a physical view to an algorithmic view of identity answers, dissolves, or sidesteps a lot of confusing questions about what happens to identity in those regimes.
(It’s also possible that identity in a sense is basically a fiction, but that’s another question entirely)
I’m suspicious of, and don’t like, using these weird sidesteps instead of just not being confused when looking at the question from the standpoint of “how will it actually look in the world/situation” (though they can be faster, yeah).
I mean, causes are real, and the future was caused by you, maybe even “controlled” by you; but it feels less like controlling something if somebody predicts my wishes and fulfills them before I can even think about fulfilling them myself.
But these are probably just trade-offs of trying to explain these things to people in plain English.
Back when I thought that picking actions by conditional expected utility was obviously correct, I was very confused about the whole decision-theory situation. So the link was very useful, thanks.
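(For concreteness, here is a small sketch, entirely my own and not from the linked post, of what picking actions by conditional expected utility amounts to in a Newcomb-style problem; the 99% predictor accuracy and the payoff numbers are illustrative assumptions.)

```python
# Conditional expected utility: choose the action a maximizing E[utility | A = a].
# Assumed toy Newcomb setup with a 99%-accurate predictor.

ACCURACY = 0.99                # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000  # opaque-box prize, transparent-box prize

def conditional_expected_utility(action):
    # P(opaque box is full | my action): with an accurate predictor,
    # the box tends to be full exactly when I one-box.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    bonus = SMALL if action == "two-box" else 0
    return p_full * BIG + bonus

for a in ("one-box", "two-box"):
    print(a, round(conditional_expected_utility(a)))
# one-box 990000
# two-box 11000
# Conditioning on the action favours one-boxing, while a causal/dominance argument
# says two-boxing gains $1,000 whatever is already in the box; that tension is
# roughly the decision-theory confusion mentioned above.
```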