This demonstrates that an agent can’t know its own decision. In this case, the predictor can’t know its own prediction, and so can’t know the agent’s action, if that action allows the prediction to be inferred. (And this limitation can’t be overcome with computational power, so Omega is just as susceptible.) For predictors, it’s enough to have a fixpoint, to pick any self-fulfilling prediction. But if the environment is playing diagonal, as you describe, then the predictor can’t make a correct prediction.
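A minimal sketch of why diagonalization leaves no fixpoint (the two-valued prediction and the `agent` function are illustrative assumptions, not from the thread):

```python
# Diagonal agent: acts on the opposite of whatever is predicted.
def agent(prediction: bool) -> bool:
    return not prediction

# A self-fulfilling prediction p must satisfy p == agent(p).
fixpoints = [p for p in (False, True) if p == agent(p)]
print(fixpoints)  # [] -- no fixpoint, hence no correct prediction exists
```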
This is not about a failure of the environment to be decision-determined; the environment you describe simply has the predictor lose for every decision.
(If you consider the question in enough detail, the distinction between decision-determined problems and other kinds of problems stops making sense, beyond highlighting that a decision can matter separately from its action-instances in the environment or from other concepts: that all of these are different concepts, and that a decision makes sense abstractly, on its own.)
If the Predictor breaks sometimes, in a way dependent on the algorithm used, not on the decision made, then that’s not decision-determined. That’s decision-determined-unless-you-play-tit-for-tat, which doesn’t count at all.
I think the fact that it’s not decision-determined is fairly important, because that means it’s not necessarily a Newcomblike problem. Haven’t finished the manuscript yet, so I don’t know all the implications of that, but I have my suspicions.
If the Predictor breaks sometimes, in a way dependent on the algorithm used, not on the decision made, then that’s not decision-determined.
Yes, that’s the mantra. But how do you unpack “dependent” and “breaks”? Dependent with respect to what alternatives (and how should we think of those alternatives)? More importantly, how can you decide that something that depends on one thing doesn’t depend on some other thing (while some uncertainty remains)?
As far as I can tell, all this dependence business has to be about the resolution of logical uncertainty. You work with concepts, say A and B, that define the subject matter without giving you a full understanding of their meaning, of the implications of the definitions. A depends on B when assuming an additional fact about B lets you infer something about A. By controlling B, you control A; and similarly, if you find a C that controls B, you can control A through controlling C. All throughout, nothing is actually changed; the concepts are fixed.
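A minimal sketch of this notion of dependence, under assumed toy definitions (the `worlds` model and the relation A = 2B are illustrative, not from the discussion): uncertainty is a set of worlds not yet ruled out, and assuming a fact about B shrinks what A can be, without anything changing.

```python
# Each "world" fixes both concepts at once; here A = 2 * B by construction.
worlds = [(b, 2 * b) for b in range(5)]

def possible_A(fact_about_B):
    """Values A can still take, given a fact about B that rules worlds out."""
    return {a for (b, a) in worlds if fact_about_B(b)}

print(possible_A(lambda b: True))    # no fact assumed: {0, 2, 4, 6, 8}
print(possible_A(lambda b: b == 3))  # assuming B = 3 lets us infer A: {6}
# Nothing was changed: the same fixed worlds, with some ruled out.
```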
If you know that A depends on B, and there’s also some C, then unless assuming full knowledge of B gives you full knowledge of A, you won’t be able to conclude that A is truly independent of C (screened off by B). Being merely unable to see how knowing C could allow learning more about A doesn’t rule out the possibility of figuring out a way later, and that would mean that C controls A after all.
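Continuing the toy model above (C and the three-component worlds are again illustrative assumptions): when B pins down A completely, C is screened off; when it doesn’t, a fact about C can still narrow A.

```python
# Worlds now fix (B, C, A). A is NOT fully determined by B: A = 2*B + C.
worlds = [(b, c, 2 * b + c) for b in range(3) for c in (0, 1)]

def possible_A(fact):
    return {a for (b, c, a) in worlds if fact(b, c)}

print(possible_A(lambda b, c: b == 2))             # B = 2 alone: {4, 5}
print(possible_A(lambda b, c: b == 2 and c == 1))  # adding C = 1 narrows: {5}
# Only if full knowledge of B already gave full knowledge of A could we
# conclude that C is screened off.
```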
So we can talk about action-determined outcomes and decision-determined outcomes, where the concept of an action, or of a decision, stands in a known dependence with the outcome. But arguing that the outcome doesn’t depend on some given other concept is much more difficult, verging on impossible if you are dealing with sufficiently complicated uncertainty.
Decision-determined was used in the manuscript to mean completely determined (up to a probability distribution) by “decision-type,” and likewise action-determined was used to mean completely determined, up to a probability distribution, by actions in a causal way. So it’s simple to show that something isn’t decision-determined in the sense used; you only need one exception, one case where the outcome depends on the algorithm and not just the decision.
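A minimal sketch of that counterexample criterion, with an entirely made-up environment (the agent names and the source-inspection trick are illustrative assumptions, not from the manuscript):

```python
import inspect

def cooperate():  # one decision, reached by two different algorithms
    return "C"

def cooperate_via_tit_for_tat():
    return "C"  # same decision, different source code

def environment(agent):
    """Pays off on the decision, except it inspects one particular algorithm."""
    decision = agent()
    if "tit_for_tat" in inspect.getsource(agent):
        return 0  # punishes the algorithm, not the decision
    return 10 if decision == "C" else 1

# Same decision, different outcomes: one exception suffices to show this
# environment is not decision-determined.
print(environment(cooperate))                  # 10
print(environment(cooperate_via_tit_for_tat))  # 0
```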
In my example the predictor wins in every situation, but basically, yeah. You’re right that it could still be decision-determined if we’re okay with having it break in some cases.
I’m not okay with having it break in some cases, though; the real world doesn’t return “undefined” very often. It’s possible to “save” it as non-pathological though not decision-determined, which can then be applied to the real world.