I’ll give (a few of them) a shot.

“Why do I think I have free will?” There seem to be two categories of things out there in the world: things whose behavior is easily modeled and thus predictable; and things whose internal structure is opaque (to pre-scientific people) and which are best predicted by taking an “intentional stance” (ascribing beliefs, desires, goals, etc.). So I build a bridge, and put a weight on it, and wonder whether the bridge will fall down. It’s pretty clearly the case that there’s some limit of weight, and if I’m below that weight—whether I use feathers or rocks—the bridge will stay up; otherwise it will collapse. Very simple model, reasonably accurate.
In contrast, if I ask my officemate to borrow his pen, he may or may not give it to me. Predicting whether he will is impossible to do precisely; the problem responds best (for laypeople) to a model with beliefs, goals, memories, etc. Maybe he’s usually helpful, and so will give me the pen. Maybe I made fun of his shirt color yesterday, and he remembers, and is angry with me, and so won’t.
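To make the contrast concrete, here’s a toy sketch (all the names, numbers, and rules are invented for illustration, not drawn from anywhere): the bridge needs only a one-number physical model, while the officemate is best predicted by positing inner state like dispositions and grudges.

```python
# Toy contrast between the two predictive strategies (all values hypothetical).

# Physical stance: the bridge is modeled by a single threshold.
BRIDGE_WEIGHT_LIMIT_KG = 1000.0  # below this it stays up, feathers or rocks alike

def bridge_holds(load_kg: float) -> bool:
    """Predict the bridge's behavior from a simple physical model."""
    return load_kg < BRIDGE_WEIGHT_LIMIT_KG

# Intentional stance: the officemate is modeled via posited beliefs/desires/memories.
class Officemate:
    def __init__(self, helpful: bool, grudges: set[str]):
        self.helpful = helpful   # a standing disposition (a "desire" to help)
        self.grudges = grudges   # remembered slights ("beliefs"/"memories")

    def will_lend_pen(self, asker: str) -> bool:
        """Predict the decision by consulting the posited inner state."""
        if asker in self.grudges:  # "I made fun of his shirt color yesterday"
            return False
        return self.helpful

print(bridge_holds(500.0))                # True: under the limit, so it holds
colleague = Officemate(helpful=True, grudges={"me"})
print(colleague.will_lend_pen("me"))      # False: he remembers, and won't
```

The point isn’t the code; it’s the shape of the two models. The first consults no inner state at all, while the second only predicts well once we posit some.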
This “intentional stance” model requires some homunculus in there to “make a decision”. It can decide to take whatever action it wants. I can’t make it do anything (in contrast to a bridge, which doesn’t “want” anything, and simply responds to whatever I do to it).
This is the element of the theory that gets labeled “free will”: intentional actors appear to be able to do any action that they “want” or “decide” to do. That’s part of the theory we use to predict their future actions.
So, why do humans have free will but computers don’t? Because most computers have behavior that is far easier to understand than human behavior, and no predictive value is gained by adopting the intentional stance towards them.