Why does learning about determinism lead to confusion about free will?
When someone is doing physics (trying to find out what happens to a physical system given its initial conditions), they are transforming a time-consuming but easy-to-express form of connecting initial conditions to end results (physical laws) into a single entry in a giant look-up table that matches initial conditions to end results (a form that is not time-consuming but is harder to express), essentially flattening out the time dimension. That creates a feeling that the process they are analyzing is pre-determined, that this giant look-up table already exists. And when they apply this to themselves, it can create a feeling of having no control over their own actions, as if their observation-action pairs were drawn from that pre-existing table. But the table doesn't actually exist; they still need to perform the computation to get to the action; there is no way around it. And wherever that process is performed, that process is the person.
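As a toy sketch of the two forms (Python, with an arbitrary stand-in for the dynamics; all names here are illustrative, not anything from the original discussion):

```python
# "Physical law" form: cheap to write down, costly to evaluate.
def step(state):
    return (31 * state + 7) % 1000  # stand-in for one tick of dynamics

def evolve(state, ticks):
    for _ in range(ticks):
        state = step(state)
    return state

# "Giant look-up table" form: instant to consult, but it only comes into
# existence after someone has performed the computation for every entry.
glut = {s0: evolve(s0, ticks=10_000) for s0 in range(1000)}

assert glut[42] == evolve(42, ticks=10_000)  # same mapping, time flattened out
```

Building glut requires actually running evolve for every entry: the "pre-existing table" is a way of presenting the computation, not a substitute for performing it.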
In other words, when people do physics on systems simple enough that they can fit in their head the initial conditions, the end result, and the connection between them, they feel a sense of "machineness" about those systems. They can then overgeneralize that feeling to all physical systems (including humans), missing the fact that the feeling is only warranted when they can actually fit the model of the system (and its initial-condition/end-result entries) in their head, which they can't in the case of humans.
They can then overgeneralize that feeling to all physical systems (including humans), missing the fact that the feeling is only warranted
I don’t follow why this is “overgeneralize” rather than just “generalize”. Are you saying it’s NOT TRUE for complex systems, or just that we can’t fit it in our heads? I can’t compute the Mandelbrot Set in my head, and I can’t measure initial conditions well enough to predict a multi-arm pendulum beyond a few seconds. But there’s no illusion of will for those things, just a simple acknowledgement of complexity.
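For the pendulum point, here's a quick sketch of why measurement precision caps the prediction horizon, using the logistic map as a stand-in for chaotic dynamics since it fits in a few lines:

```python
# Two trajectories of a chaotic map whose initial conditions differ by
# one part in a trillion: the error grows roughly exponentially, so
# prediction only holds for a bounded horizon.
x, y = 0.4, 0.4 + 1e-12
for step in range(60):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"trajectories decorrelated after {step + 1} steps")
        break
```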
The “will” is supposedly taken away by the GLUT (giant look-up table), which it is possible to create and grasp for small systems; people then (wrongly) generalize this to all systems, including themselves. I’m not claiming that any object you can’t predict has free will; I’m saying that having ruled out free will in a small system does not imply lack of free will in humans. I’m claiming “physicality ⇏ no free will” and “simplicity ⇒ no free will”; I’m not claiming “complexity ⇒ free will”.
Hmm. What about the claim “physicality ⇒ no free will”? This is the more common assertion I see, and the one I find compelling.
The simplicity/complexity point I more often see attributed to “consciousness” (and I agree: complexity does not imply consciousness, but simplicity denies it), but that’s at least partly orthogonal to free will.
Consider the ASP problem (Agent Simulates Predictor), where the agent gets to decide whether it can be predicted, i.e. whether there is a dependence of the predictor on the agent. The agent can destroy the dependence by knowing too much about the predictor and making use of that knowledge. So this “knowing too much” (about the predictor) is what destroys the dependence, but it’s not just a consequence of the predictor being too simple; rather, it comes from letting an understanding of the predictor’s behavior precede the agent’s behavior. It’s in the agent’s interest not to let this happen, to avoid making use of this knowledge (in an unfortunate way), so as to maintain the dependence (so that it gets to predictably one-box).
So here, when you call something simple as opposed to complicated, you are positing that its behavior is easy to understand, and so it’s easy to have something else make use of knowledge of that behavior. But even when that’s easy, it can be avoided intentionally. So even simple things can have free will (such as humans in the eyes of a superintelligence) from the point of view of something that decides to avoid knowing too much, which can be a good thing to do, and, as the ASP problem illustrates, can influence said behavior (the behavior could be different if not known, since the fact of not-being-known could itself be easily knowable to that behavior).
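A toy Python rendering of this dynamic (the setup and names are mine, not a faithful formalization of ASP): the predictor can only track agents whose choice doesn’t consult a simulation of the predictor itself.

```python
# Toy model: the predictor simulates the agent "blind", i.e. without the
# agent having access to the prediction.
def predict(agent):
    return agent(prediction=None)

def committed_agent(prediction):
    # Declines to use knowledge of the predictor, so the prediction
    # genuinely depends on (tracks) its actual choice.
    return "one-box"

def exploiting_agent(prediction):
    # Uses knowledge of the predictor when it has it: under the blind
    # simulation it looks like a one-boxer, but its actual behavior
    # diverges, destroying the dependence.
    return "one-box" if prediction is None else "two-box"

# The prediction tracks the committed agent's real action...
assert predict(committed_agent) == committed_agent(prediction="one-box")
# ...but not the exploiting agent's: knowing too much broke the link.
assert predict(exploiting_agent) != exploiting_agent(prediction="one-box")
```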
I’d say this is correct, but it’s also deeply counterintuitive. We don’t feel like we are just a process performing itself, or at least that’s way too abstract to wrap our heads around. The intuitive notion of free will is IMO something like the following:
had I been placed ten times in exactly the same circumstances, with exactly the same input conditions, I could theoretically have come up with different courses of action in response to them, even though one of them may make a lot more sense for me, based on some kind of ineffable non-deterministic quality that however isn’t random either, but is the manifestation of a self that exists somehow untethered from the laws of causality
Of course it’s not worded exactly that way in most people’s minds, but I think that’s really the intuition that clashes against pure determinism. Determinism is a materialistic viewpoint, and lots of people are, consciously or not, dualists, implicitly assuming there’s one special set of rules that applies to the self/mind/soul and doesn’t apply to everything else.
Some confusion remains appropriate, because for example there is still no satisfactory account of a sense in which the behavior of one program influences the behavior of another program (in the general case, without constructing these programs in particular ways), with neither necessarily occurring within the other at the level of syntax. In this situation, the first program could be said to control the second (especially if it understands what’s happening to it), or the second program could be said to perform analysis of (reason about) the first.
What do you mean by programs here?
Just Turing machines / lambda terms, or something like that. And “behavior” is however you need to define it to make a sensible account of the dependence between “behaviors”, or of how one of the “behaviors” produces a static analysis of the other. The intent is to capture a key building block of acausal consequentialism in a computational setting, which is one way of going about formulating free will in a deterministic world.
(You don’t just control the physical world through your physical occurrence in it, but also, for example, through the way other people reason about your possible behaviors, and so an account that simply looks for your occurrence in the world as a subterm/part misses an important aspect of what’s going on. Turing machines also illustrate this, since they don’t have subterm/part structure at all.)
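The missing general account would have to go far beyond this, but as a concrete toy of the flavor of situation meant (names and setup are mine): program B never contains program A as a syntactic part, yet B’s output depends on A’s behavior, because B reasons about A rather than containing it.

```python
# Program A: its source never appears inside B below.
def program_a(n):
    return n * n

# Program B reasons about A by black-box probing plus a crude "static
# analysis": it searches a small hypothesis space for a matching
# behavior, then acts on what it inferred. A thereby influences B's
# output without occurring in B as a subterm.
def program_b(probe):
    hypotheses = {
        "square": lambda n: n * n,
        "double": lambda n: 2 * n,
        "negate": lambda n: -n,
    }
    for name, h in hypotheses.items():
        if all(h(n) == probe(n) for n in range(5)):
            return f"counterpart behaves like {name}"
    return "counterpart not understood"

print(program_b(program_a))  # -> "counterpart behaves like square"
```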