By “tit for tat” I am referring to the notable strategy in the iterated prisoner’s dilemma. Agents using this strategy will keep cooperating as long as the other person cooperates, but if the other person defects then they will defect too. It’s an excellent strategy by many measures, beating out more complicated strategies, and we probably have something like it built into our heads.
By analogy, a “tit for tat” strategy in Newcomb’s problem with transparent boxes would be to one-box if the Predictor “cooperates,” and two-box if the Predictor “defects.”
But what does the Predictor see when it looks into the future of an agent with this strategy? Either way it chooses, it will have chosen correctly, so the Predictor needs some other, non-decision-determined criterion to decide.
Alternately you could think of it as making the decision-type of the agent undefined (at the time the Predictor is filling the boxes), thus making it impossible for the problem to have any well-defined decision-determined statement.
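The Predictor’s predicament can be sketched in a few lines of Python (a toy model; the function names and the correctness criterion are my own assumptions, not anything canonical):

```python
ONE_BOX, TWO_BOX = "one-box", "two-box"

def tit_for_tat(box_filled: bool) -> str:
    """One-box if the Predictor 'cooperated' (filled the box), else two-box."""
    return ONE_BOX if box_filled else TWO_BOX

def prediction_correct(fill: bool, agent) -> bool:
    """The Predictor fills the box iff it predicts one-boxing; it is
    correct when the agent's actual choice matches that prediction."""
    action = agent(fill)
    return (fill and action == ONE_BOX) or (not fill and action == TWO_BOX)

# Both of the Predictor's options are self-fulfilling against tit-for-tat,
# so neither choice is singled out by the agent's decision-type alone.
assert prediction_correct(True, tit_for_tat)   # fills box, agent one-boxes
assert prediction_correct(False, tit_for_tat)  # leaves it empty, agent two-boxes
```

Since both branches verify, the Predictor’s choice has to come from some criterion outside the agent’s decision, which is the point above.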
Just to clarify, I think your analysis here doesn’t apply to the transparent-boxes version that I presented in Good and Real. There, the predictor’s task is not necessarily to predict what the agent does for real, but rather to predict what the agent would do in the event that the agent sees $1M in the box. (That is, the predictor simulates what—according to physics—the agent’s configuration would do, if presented with the $1M environment; or equivalently, what the agent’s ‘source code’ returns if called with the $1M argument.)
If the agent would one-box if $1M is in the box, but the predictor leaves the box empty, then the predictor has not predicted correctly, even if the agent (correctly) two-boxes upon seeing the empty box.
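The Good and Real variant described above can be sketched the same way (again a toy model with names of my own choosing): the predictor’s correctness is judged only against the $1M branch of the agent’s source code, so the empty-box branch never enters into it.

```python
ONE_BOX, TWO_BOX = "one-box", "two-box"

def tit_for_tat(box_filled: bool) -> str:
    return ONE_BOX if box_filled else TWO_BOX

def drescher_correct(fill: bool, agent) -> bool:
    """Correct iff the fill decision matches what the agent WOULD do
    on seeing $1M in the box; the empty-box branch is never consulted."""
    would_one_box = agent(True) == ONE_BOX
    return fill == would_one_box

assert drescher_correct(True, tit_for_tat)       # filling is the one correct move
assert not drescher_correct(False, tit_for_tat)  # leaving it empty is a misprediction
```

Under this criterion the tit-for-tat agent no longer makes both of the predictor’s options correct: only filling the box verifies.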
Interesting. This would seem to return it to the class of decision-determined problems, and for an illuminating reason—the algorithm is only run with one set of information—just like how in Newcomb’s problem the algorithm has only one set of information no matter the contents of the boxes.
This way of thinking makes Vladimir’s position more intuitive. To put words in his mouth: instead of being not decision-determined, the “unfixed” version is merely two-decision-determined, and then left undefined for half the bloody problem.
This demonstrates that an agent can’t know its own decision. In this case, the predictor can’t know its own prediction, and so can’t know the agent’s action, if that action allows the prediction to be inferred. (And this limitation can’t be fought with computational power, so Omega is just as susceptible.) For predictors, it’s enough to have a fixpoint: to pick any self-fulfilling prediction. But if the environment is playing diagonal, as you describe, then the predictor can’t make a correct prediction.
This is not about a failure of the environment to be decision-determined; the environment you describe simply has the predictor lose for every decision.
(If you consider the question in enough detail, the distinction between decision-determined problems and other kinds of problems doesn’t make sense, apart from highlighting that a decision can be important apart from its action-instance in the environment or other concepts; that these are all different concepts, and that decision makes sense abstractly, on its own.)
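The fixpoint framing can be made concrete (an illustrative sketch; the framing is mine): call a prediction self-fulfilling when the agent’s response to it matches it. The tit-for-tat agent has two fixpoints, while a “diagonal” agent that does the opposite of whatever is predicted has none.

```python
ONE_BOX, TWO_BOX = "one-box", "two-box"

def tit_for_tat(predicted_one_box: bool) -> str:
    return ONE_BOX if predicted_one_box else TWO_BOX

def diagonal(predicted_one_box: bool) -> str:
    # Always does the opposite of the prediction.
    return TWO_BOX if predicted_one_box else ONE_BOX

def fixpoints(agent):
    """All self-fulfilling predictions for this agent."""
    return [p for p in (True, False)
            if (agent(p) == ONE_BOX) == p]

assert fixpoints(tit_for_tat) == [True, False]  # every prediction fulfils itself
assert fixpoints(diagonal) == []                # no correct prediction exists
```

No amount of computational power helps the predictor in the diagonal case, because the search space of predictions simply contains no fixpoint.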
If the Predictor breaks sometimes, in a way dependent on the algorithm used, not on the decision made, then that’s not decision-determined. That’s decision-determined-unless-you-play-tit-for-tat, which doesn’t count at all.
I think the fact that it’s not decision-determined is fairly important, because that means it’s not necessarily a Newcomblike problem. Haven’t finished the manuscript yet, so I don’t know all the implications of that, but I have my suspicions.
If the Predictor breaks sometimes, in a way dependent on the algorithm used, not on the decision made, then that’s not decision-determined.
Yes, that’s the mantra. But how do you unpack “dependent” and “breaks”? Dependent with respect to what alternatives (and how to think of those alternatives)? More importantly, how can you decide that something dependent on one thing doesn’t depend on some other thing (while some uncertainty remains)?
As far as I can tell, all this dependence business has to be about resolution of logical uncertainty. You work with concepts, say A and B, that define the subject matter without giving you full understanding of their meaning, of the implications of the definitions. A depends on B when assuming an additional fact about B allows you to infer something about A. By controlling B, you control A, and similarly, if you find a C that controls B, you can control A through controlling C. All throughout, nothing is actually changed; the concepts are fixed.
If you know that A depends on B, and there’s also some C, then unless full knowledge of B gives you full knowledge of A, you won’t be able to conclude that A is truly independent of C (screened off by B). Being merely unable to see how knowing C could allow learning more about A doesn’t rule out figuring out a way later, and that would mean that C controls A after all.
So we can talk about action-determined outcomes and decision-determined outcomes, where the concept of an action, or of a decision, is in a known dependence with the outcome. But arguing that the outcome doesn’t depend on some given other concept is much more difficult, and closer to impossible if you are dealing with sufficiently complicated uncertainty.
Decision-determined was used in the manuscript to mean completely determined (up to a probability distribution) by “decision-type,” and action-determined was likewise used to mean completely determined, up to a probability distribution, by actions in a causal way. So it’s simple to show that something isn’t decision-determined in the sense used: you only need one exception, one case where the outcome depends on the algorithm and not just the decision.
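The “one exception” test can be illustrated with a deliberately pathological toy environment (hypothetical setup, mine): two algorithms that always output the same decision but receive different outcomes, which is enough to show the problem is not decision-determined in this sense.

```python
def algo_plain(observation):
    return "one-box"

def algo_tft(observation):
    # Identical output to algo_plain; only the source differs.
    return "one-box"

def outcome(algorithm, observation="$1M"):
    """Environment that inspects WHICH algorithm is running,
    not just the decision it outputs."""
    if algorithm is algo_tft:   # a non-decision-determined criterion
        return 0
    return 1_000_000

assert algo_plain("$1M") == algo_tft("$1M")      # identical decisions...
assert outcome(algo_plain) != outcome(algo_tft)  # ...different outcomes
```

One such case suffices: the outcome varies while the decision-type is held fixed, so the outcome cannot be a function of the decision alone.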
In my example the predictor wins in every situation, but basically, yeah. You’re right that it could still be decision-determined if we’re okay with having it break in some cases.
I’m not okay with having it break in some cases, though; the real world doesn’t return “undefined” very often. It’s possible to “save” it as non-pathological though not decision-determined, which can then be applied to the real world.
Predictor is not the agent that is making a decision. (Also, unpack the last sentence.)
That’s not essential, though (see the dual-simulation variant in Good and Real).
Well, yeah, so long as all the decisions have defined responses.