It’s a question of where the information is combined and what options the system considered. Yes, the chess engine does have a specific slice of free will about its actions; but clearly it doesn’t have meta-free-will about what kinds of actions to take. It is comparatively deterministic in its decisionmaking; we, by contrast, have many incremental steps of noise acting on a saddle point.
Saddle points sit at the edge of chaos; balancing on them is a delicate act that cells perform at all times simply by staying alive, so in this sense life almost always has at least a little free will at the cellular scale. When neurons communicate successfully at scale, they can form large networks that represent the shape of the outside world in great detail, then balance on a decision and make it incrementally, thereby integrating large amounts of information. On this view, free will is the chaos-resolution process: it integrates large amounts of information while still retaining the self-trajectory.
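A toy sketch of "incremental steps of noise on a saddle point" (an illustration only, not a claim about the author's model): gradient descent on f(x, y) = x² − y² leaves a point frozen exactly on the saddle, while arbitrarily tiny noise, integrated over many steps, decides which side of the unstable axis it eventually falls toward.

```python
import random

def noisy_descent(seed, steps=200, lr=0.1, noise=1e-6):
    """Descend f(x, y) = x**2 - y**2 from the saddle at (0, 0),
    with a tiny Gaussian kick at each step."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0  # balanced exactly on the saddle point
    for _ in range(steps):
        # gradient of f is (2x, -2y); x is the stable direction,
        # y the unstable one that amplifies any accumulated noise
        x -= lr * (2 * x) + rng.gauss(0, noise)
        y -= lr * (-2 * y) + rng.gauss(0, noise)
    return y  # the sign of y records how the balance resolved

# with zero noise the point stays frozen at the saddle forever;
# with noise, different noise histories resolve the same balanced
# start toward different outcomes
print(noisy_descent(seed=1, noise=0.0))
print(noisy_descent(seed=1) > 0, noisy_descent(seed=2) > 0)
```

The point of the sketch is only that the decision is not stored in the initial state: it is written incrementally, step by step, by the noise history interacting with the unstable direction.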
Deterministic or not, it’s highly chaotic. Even if the RNG is pseudorandom rather than truly random, it looks random from where we sit, many levels of nesting above it. Because of that, even if the outcome is in some sense pseudorandomly predetermined, it is not known to us, and so we have the hyperreal decisionmaking process of writing some portion of the future: as far as we are concerned, the future is logically underdetermined until we decide what it is, via our integration of information into prediction and decisionmaking about which paths to diffuse away.
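The pseudorandom point can be made concrete (a minimal sketch using Python's standard PRNG): given the seed, the "random" stream is fully predetermined and exactly reproducible; without the seed, an observer embedded in the system has no practical shortcut to the next value, so the stream is random as far as they are concerned.

```python
import random

# two generators with the same seed are the same predetermined stream
a = random.Random(42)
b = random.Random(42)
seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
assert seq_a == seq_b  # identical seed, identical "randomness"

# a generator with an unknown seed produces a stream that, to anyone
# without that seed, is indistinguishable in practice from randomness
hidden = random.Random()  # seeded from OS entropy; seed not retained
print(hidden.random())
```

Predetermination at the level of the seed is compatible with genuine unpredictability at the level of the observer, which is the epistemic situation the paragraph describes.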
This is a narrow form of the sparse multiverse hypothesis: we are bubbles in a potentially mostly-dense multiverse, but within the bubble where we continue to exist, we decide what is most probable by denoising towards it. When someone has a brain injury of that kind, they can lose some capability to combine information, because some paths through the network of brain cells no longer maintain the various parts of being a hybrid model-based reinforcement learner as well. But they retain free will within what they’re able to model.
In the brain tumor case, I would say that the defection against other life clearly came from the tumor, via its effect on the brain's output behaviors. However, I don’t like the penalty example; my view is that a decision to murder made in a clear mind should also be considered an unwellness that one may be asked to self-modify away, just like cancers that make one a danger to others. And in both the case of cancer and of “mere” decision, the question should be one of decisionmaking after the fact: will you allow society to ask you to prove you’ve self-modified away from the decisionmaking matter-process that caused you to do this last time?