They can overgeneralize that feeling over all physical systems (like humans), missing out on the fact that this feeling should only be felt
I don’t follow why this is “overgeneralize” rather than just “generalize”. Are you saying it’s NOT TRUE for complex systems, or just that we can’t fit it in our heads? I can’t compute the Mandelbrot Set in my head, and I can’t measure initial conditions well enough to predict a multi-arm pendulum beyond a few seconds. But there’s no illusion of will for those things, just a simple acknowledgement of complexity.
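To be concrete about the kind of unpredictability I mean, here's a tiny sketch of sensitive dependence on initial conditions (using the logistic map as a stand-in for the pendulum; the specific map and starting points are just my illustrative choices):

```python
# A small sketch of the "can't measure initial conditions well enough" point,
# using the logistic map as a stand-in for a multi-arm pendulum. Two
# trajectories starting 1e-10 apart diverge completely within a few dozen
# steps, yet nobody attributes a "will" to the map: it's just deterministic
# and hard to predict.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(x - y):.3e}")
# By roughly step 40 the two trajectories are as far apart as the state
# space allows, despite the initial difference being unmeasurably small.
```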
The “will” is supposedly taken away by a GLUT (giant lookup table), which it is possible to create and fully grasp for a small system; people then (wrongly) generalize this to all systems, including themselves. I’m not claiming that any object you can’t predict has free will; I’m saying that having ruled out free will in a small system does not imply a lack of free will in humans. I’m claiming “physicality ⇏ no free will” and “simplicity ⇒ no free will”, not “complexity ⇒ free will”.
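To illustrate what I mean by having a full grasp of a small system, here is a minimal GLUT sketch for an invented 3-bit toy rule (the rule itself is arbitrary, chosen only for illustration):

```python
# A minimal sketch: a GLUT for a tiny deterministic "agent" with 3 bits of
# state. For a system this small, the entire state->output mapping can be
# enumerated up front, so there is nothing left for a "will" to decide.
# The claim above is that this intuition doesn't automatically transfer to
# systems like humans, whose state space can't be tabulated in practice.

def tiny_agent(state: int) -> int:
    """Some fixed deterministic rule on a 3-bit state (0..7)."""
    return (state * 3 + 1) % 8

# The GLUT: precompute the agent's behavior for every possible state.
glut = {state: tiny_agent(state) for state in range(8)}

# The agent's behavior is now fully captured by a table lookup.
assert all(glut[s] == tiny_agent(s) for s in range(8))
print(glut)  # {0: 1, 1: 4, 2: 7, ...}
```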
Hmm. What about the claim “physicality ⇒ no free will”? That’s the more common assertion I see, and the one I find compelling.
I often see the simplicity/complexity point attributed to “consciousness” instead (and I agree: complexity does not imply consciousness, but simplicity rules it out), but that’s at least partly orthogonal to free will.
Consider the ASP problem (Agent Simulates Predictor), where the agent gets to decide whether it can be predicted, i.e. whether there is a dependence of the predictor on the agent. The agent can destroy that dependence by knowing too much about the predictor and making use of that knowledge. So this “knowing too much” (about the predictor) is what destroys the dependence, but it’s not merely a consequence of the predictor being too simple; it comes from letting an understanding of the predictor’s behavior precede the agent’s behavior. It’s in the agent’s interest not to let this happen, to avoid making use of that knowledge (in an unfortunate way), and so to maintain the dependence (so that it gets to predictably one-box).
So here, when you call something simple as opposed to complicated, you are positing that its behavior is easy to understand, and therefore that it’s easy for something else to make use of knowledge of that behavior. But even when that’s easy, it can be intentionally avoided. So even simple things can have free will (as humans might in the eyes of a superintelligence) from a point of view that decides to avoid knowing too much. That can be a good thing to do, and as the ASP problem illustrates it can influence the behavior in question: the behavior could be different if not known, since the fact of not being known may itself be easily knowable to the behavior.
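To make that dynamic concrete, here is a rough toy sketch of the Newcomb-like payoffs (entirely my own framing: the “still predictable” flag and the fallback prediction are stand-ins, not a formal statement of ASP):

```python
# A toy framing of the ASP dynamic. Two hypothetical agent policies:
#   "committed"  - ignores what it knows about the predictor and one-boxes;
#                  its behavior stays predictable, so the predictor fills
#                  the opaque box.
#   "exploiting" - simulates the (simpler) predictor first and lets that
#                  knowledge drive its choice; the dependence of the
#                  prediction on the agent's choice is broken, and the
#                  resource-limited predictor falls back to predicting
#                  two-boxing.

BIG, SMALL = 1_000_000, 1_000

def predictor(agent_is_still_predictable: bool) -> str:
    # The predictor can only condition on the agent when the agent hasn't
    # pre-empted it; otherwise it defaults to the safe prediction.
    return "one-box" if agent_is_still_predictable else "two-box"

def payout(choice: str, prediction: str) -> int:
    opaque = BIG if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + SMALL

# Committed agent: maintains the dependence and one-boxes.
print(payout("one-box", predictor(agent_is_still_predictable=True)))   # 1000000

# Exploiting agent: uses its model of the predictor, loses the dependence.
print(payout("two-box", predictor(agent_is_still_predictable=False)))  # 1000
```

The only point of the toy numbers is that the policy which refrains from exploiting its model of the predictor comes out ahead, which is the sense in which maintaining the dependence is in the agent’s interest.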