The notion of control that makes sense to me is enacting a particular self-fulfilling belief out of a collection of available correct self-fulfilling beliefs. This is not mere correlation (in the case of correlation, one should look for a single belief in control of the correlated events). The events controlled by such beliefs may well be instantiated by processes unrelated to the algorithm that determines which of the possible self-fulfilling beliefs gets enacted, that is, unrelated to the algorithm that controls the events. The belief itself doesn’t have to be explicitly instantiated at all; it’s part of the algorithm’s abstract computation. The processes instantiating the events, and those channeling the algorithm, only have to be understood by the algorithm that discovers correct self-fulfilling beliefs and decides which one to enact; they don’t have to themselves be controlled by it. In fact, their not being controlled makes for a better setup, since the belief under consideration is then more specific. (I don’t understand the preferred alternative you refer to with “free cause”.)
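A minimal sketch of this notion of control, under assumed names (`world`, `utility`, `enact` are all hypothetical, introduced only for illustration): the environment is modeled as a function from the agent's settled belief to the event that then occurs, a belief is "correct" when it is a fixed point of that function, and control consists of choosing which correct self-fulfilling belief to enact.

```python
def world(belief):
    # Hypothetical environment: it channels the agent's settled belief
    # into the matching event, so every candidate below is self-fulfilling.
    return belief

CANDIDATES = ["turn left", "turn right", "stay put"]

def utility(event):
    # Assumed preferences over the resulting events.
    return {"turn left": 1, "turn right": 3, "stay put": 0}[event]

def enact():
    # The controlling algorithm: keep only the correct (self-fulfilling)
    # beliefs, then enact the one whose resulting event it prefers.
    correct = [b for b in CANDIDATES if world(b) == b]
    return max(correct, key=utility)

print(enact())  # -> "turn right"
```

Note that `world` here is deliberately not something the algorithm controls; the algorithm only has to understand it well enough to check which beliefs are fixed points.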
I typed too fast, and conflated “free choice” with “uncaused action”. I don’t have a gears-level understanding of causality that admits of BOTH the predictability of a decision AND an algorithm that “decides which one to enact”. It seems to me that in order to be predictable, the decision has to be caused by some observable configuration BEFORE the prediction. That is, there is an upstream cause of both the prediction and the decision.