I find this and the smoker’s lesion to have the same flaw, namely: it does not make sense to me to both suppose that the agent is using EDT, and suppose some biases in the agent’s decision-making. We can perhaps suppose that (in both cases) the agent’s preferences are what is affected (by the genes, or by the physics). But then, shouldn’t the agent be able to observe this (the “tickle defense”), at least indirectly through behavior? And won’t this make it act as CDT would act?
But: I find the blackmail letter to be a totally compelling case against EDT.
I agree with all of this, and I can’t understand why the Smoking Lesion is still seen as the standard counterexample to EDT.
Regarding the blackmail letter: I think that in principle, it should be possible to use a version of EDT that also chooses policies based on a prior instead of actions based on your current probability distribution. That would be “updateless EDT”, and I think it wouldn’t give in to Evidential Blackmail. So I think rather than an argument against EDT, it’s an argument in favor of updatelessness.
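For concreteness, here is a rough numeric sketch of the policy-vs-action distinction. The XOR-style letter mechanism, the probabilities, and the costs below are all assumptions made up for illustration; the thread doesn’t specify how the letter works.

```python
# Sketch, assuming an XOR-blackmail-style setup: the blackmailer sends the letter
# only if the claim "disaster XOR you-will-pay" would be true. All numbers invented.

P_DISASTER = 0.01
COST_DISASTER = 1_000_000
COST_PAYMENT = 1_000

def letter_sent(disaster: bool, pays_on_letter: bool) -> bool:
    # The blackmailer only sends when the letter's claim would come out true.
    return disaster != pays_on_letter

def prior_expected_cost(pays_on_letter: bool) -> float:
    """Updateless evaluation: score the whole policy from the prior, before any letter."""
    total = 0.0
    for disaster, p in ((True, P_DISASTER), (False, 1 - P_DISASTER)):
        sent = letter_sent(disaster, pays_on_letter)
        pays = pays_on_letter and sent
        total += p * ((COST_DISASTER if disaster else 0) + (COST_PAYMENT if pays else 0))
    return total

print(prior_expected_cost(True))    # 10990.0 -- the "pay if I get the letter" policy
print(prior_expected_cost(False))   # 10000.0 -- "never pay" does better from the prior

# Ordinary EDT decides after updating on the letter: given the letter, paying
# implies no disaster (cost 1,000) while refusing implies disaster (cost
# 1,000,000), so it pays -- which is exactly what the policy comparison avoids.
```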
Smoking lesion is “seen as the standard counterexample” at least on LW pretty much because people wanted to agree with Eliezer.
It’s also considered the standard in the literature.
Evidence?
I think the fact that there was near-universal agreement on LW that one-boxing is the right thing to do in Newcomb, and smoking in the Smoking Lesion, while Eliezer was the one who proposed this combination, together with the fact that this combination is an unusual one, is pretty good evidence that the desire to agree with Eliezer was involved.
Of course, it could be that his arguments were just very convincing, but given human nature that seems less likely.
I think that Newcomb and the Smoking Lesion are logically equivalent, and that you should therefore either both one-box and not smoke, or both two-box and smoke. I choose one-boxing and not smoking. The apparent logical equivalence here is more evidence that some non-logical desire (such as wanting to agree with Eliezer) is largely responsible. Of course the people who hold that position will say that they do not see this evidence (i.e. that there is no equivalence), but this is common in arguments that involve people’s motives in this way.
Well, Gary Drescher’s Good and Real endorses one-boxing and smoking (the latter, admittedly, in passing and in a footnote) and I’m pretty sure he didn’t get his ideas from Eliezer.
I think that Newcomb and the Smoking Lesion are logically equivalent
And the people you are talking about do not. Perhaps you would care to demonstrate the logical equivalence of the two scenarios? (Without, of course, appealing to any principles that make one or the other impossible.)
Of course, it could be that his arguments were just very convincing, but given human nature that seems less likely.
Let me summarize the situation here. Someone makes a series of arguments purporting to show that two possibly surprising things X and Y are both correct. A bunch of people, selected largely on the basis of having read these arguments and others that accompany them, mostly think that both X and Y are correct. You conclude that these people are motivated by wanting to agree with the guy who made the arguments, and dismiss the possibility that the arguments might be good ones, or at least convincing ones, out of hand on general principles.
Well, obviously you could be right. But I can’t say you’ve made a very compelling case.
(I offer the following alternative hypotheses, and remark that you have offered nothing resembling evidence against any of them. 1. Eliezer’s arguments for one-boxing and smoking are in fact very good arguments, and most people smart enough to understand them agree once they’ve read them. 2. This stuff is hard to think about and smart people who think hard about it don’t all come to the same conclusion; certain approaches to thinking lead naturally to finding Eliezer’s arguments convincing; the LW community consists of people who have that kind of brain, and for them even if not for everyone Eliezer’s arguments are convincing once read. 3. Eliezer’s arguments are very poor arguments, but anyone able to see how bad they are is likely to give up on LW having seen them, so the remaining LW community is made up of people too stupid to see their flaws.)
In any case, I wonder whether I misinterpreted what you meant by “seen as the standard counterexample”. Saying that A is seen as the standard counterexample to B means that (1) it’s seen as a counterexample and (2) other purported counterexamples are not preferred. I took you to be saying that #2 was the result of wanting to agree with Eliezer, but now it looks as if it was #1 you were referring to. That’s odd since the context was someone’s suggestion that something else was a better counterexample.
Let’s say we have a precondition, X. Assume that we know:
X causes me to choose A or B.
X causes something else, condition C or D.
It does this in such a way that either I choose A, and condition C holds, or I choose B, and condition D holds.
I prefer A to B, in the abstract. But I would prefer B and D to A and C. What should I do?
If this question has an answer, Newcomb and the Smoking Lesion are equivalent. I can fill in the letters if needed, but it should be obvious.
I don’t see any good reason to think that that question does have an answer that doesn’t depend on how those various kinds of causation operate. It’s also not clear to me exactly what X is supposed to be in the Newcomb case. The entire state of my brain and environment prior to my being presented with the two boxes, or something?
I think the question presents the information which Eliezer in fact used to conclude that he should take only one box in Newcomb, and if that is true, he should also have concluded that he should not smoke. It is true that if someone responds, “not enough info,” then he can say that Newcomb and the Smoking Lesion differ. But in that case, what additional information are you asking for? You haven’t suggested any way to specify the additional info which supposedly would tell you whether to choose A or B. I would just say that I know without any additional info that I should choose B, because that way I will get B and D, which I prefer to A and C. What extra factor is needed to get you to choose A, and what to choose B?
Yes, in the Newcomb case X will be the pre-existing state of your brain and environment, since this is the cause both of Omega’s decision whether to place the million in the box and of your choice.
I am not proposing a specific algorithm, so I don’t think I can give a very specific answer to that question. But I can say that my intuitions about whether to prefer A+C or B+D depend on what sort of thing X is and how it causes me to make the choices I do; things that seem particularly relevant here are that in the Newcomb case (1) X, as you’ve defined it, includes (or at least implies) all the details of my decision-making process and (2) the way in which X causes C or D explicitly involves my choosing A or B, whereas in the smoking lesion case neither of those is anything like true.
I think #2 is why Eliezer’s own pet decision theory gives different answers in the Newcomb and Smoking cases. If the two are, really, logically equivalent then you should be able to demonstrate an outright inconsistency in Eliezer’s “TDT”. Can you do that?
I have not seen a valid derivation of smoking from Eliezer’s TDT, so I am not saying that TDT is inconsistent. I suspect that TDT actually implies not smoking. The point of the generalization is that any decision theory that answers the question will say either [one-box, don’t smoke] or [two-box, smoke]. Causal decision theory does answer the question: it says that you aren’t responsible for C or D, and you prefer A to B, so do A. And non-causal decision theories in general will say do B, because you prefer B & D to A & C; I think this is probably true of TDT as well.
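As a toy illustration of the two answers to the abstract question above (the utilities below are invented, chosen only so that A beats B holding the condition fixed, while B & D beats A & C):

```python
# Toy version of the abstract question above. A is worth one util more than B
# under either condition, but B together with D beats A together with C.

U = {("A", "C"): 1, ("A", "D"): 11, ("B", "C"): 0, ("B", "D"): 10}

# Evidential reasoning: conditional on an act, expect the matching condition.
edt_choice = max(["A", "B"], key=lambda act: U[(act, "C" if act == "A" else "D")])

# Causal reasoning: the condition is fixed by X before I act, and whatever it is,
# A is worth one util more -- so CDT picks A by dominance.
def cdt_value(act: str, p_condition_C: float) -> float:
    return p_condition_C * U[(act, "C")] + (1 - p_condition_C) * U[(act, "D")]

print(edt_choice)                                   # B
print(cdt_value("A", 0.5) - cdt_value("B", 0.5))    # 1.0, whatever p_condition_C is
```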
I agree with you that the reason some people want different answers is their ideas about the causality there. When we previously had these discussions, people constantly said things like, “if the lesion has a 100% correlation, then you can’t make a choice anyway,” and things like that, which is an intuition about the causality. But obviously that is not true except in the sense that if Omega has a 100% correlation and makes a decision, you can no longer make a choice in Newcomb either. In fact, I think that any time you add something relevant to the decision to the general case I presented, you can construct something parallel for the Smoking Lesion and for Newcomb.
What is included in the Newcomb case might depend on the particular case: if someone is absolutely determined to one-box no matter what the circumstances, then the state of his brain alone might be X. And this is really no different from the lesion, since if we are to imagine the lesion case working in real life, we need to include the relationship between the physical lesion and the rest of the brain. So the state of the brain might be sufficient for both, at least in some cases.
“The way in which X causes C or D explicitly involves my choosing A or B, whereas in the smoking lesion case neither of those is anything like true.” It does matter how the lesion gets correlated with smoking, just as it matters how Newcomb’s prediction gets correlated with one or two-boxing. This is why I prefer to discuss the case of 100% correlation first: because in this case, they have to be correlated in the right way in both cases.
Suppose there is some correlation but it is not 100%. There will be parallel cases for Smoking Lesion and for Newcomb where the correlation is not the right kind:
Smoking Lesion. Suppose the lesion is correlated with smoking only by causing you to have a desire for smoking. Then someone can say, “I have a strong desire to smoke. That means I probably have the lesion. But if I smoke, it doesn’t mean my desire is any stronger, since I already have that desire; so I might as well smoke.” Note that even evidential decision theory recommends smoking here, at least given that you can directly take note of the condition of your desire; if you can validly argue, “if I actually smoke, that suggests my desire was a bit stronger than I realized, so I will also be more likely to have the lesion,” that may change the situation (I’ll discuss this below).
Newcomb. Suppose Omega’s prediction is correlated with one-boxing only by taking note of previous statements a person has made and determining whether most of them support one-boxing or two-boxing. Then someone can say, “Most of my statements in the past have supported one-boxing. So the million is probably in the box. So I might as well take both boxes. I will probably still get the million, since this will not affect the past statements that Omega is judging from.” Even evidential decision theory will recommend this course of action, and I think even Eliezer would agree that if we know for a fact that Omega is judging in this way, and we directly know the condition of our past statements, two-boxing is appropriate. But again, it is different if one can validly argue, “if I take two boxes now, that will likely mean my promotion of one-boxing in the past wasn’t quite as strong as I thought, so the million will be less likely to be there.” This kind of uncertainty may again change the situation in the same way as uncertainty about my desire above.
Suppose the correlation is not 100%, but we have one of the conditional situations mentioned above: where if I do A, I actually do increase my expectation of C, and if I do B, I actually do increase my expectation of D. This is the right kind of correlation. And in this case, evidential decision theory recommends doing B in both cases, and I think the reasons are parallel for Newcomb and for Smoking lesion. [Edit: obviously if the correlation is not 100% it will depend on the particular correlation and on concrete utilities; I ignored this for simplicity.]
But let’s consider another case where the correlation isn’t the right kind. Again, the lesion causes smoking by causing desire. And I am uncertain of exactly how strong my desire is, but I know I have some desire. Then it would appear at first that evidential decision theory recommends not smoking. But the situation will be changed if I can validly argue, “I am going to decide using some rigid decision theory that always recommends the same course of action in this situation. And this decision theory recommends smoking. This will imply in no way that my desire was any stronger, since it wasn’t the strength of the desire that led to it, but this rigid decision theory.” In that case, choosing to smoke will not increase your expectation that you have the lesion. And therefore even evidential decision theory will recommend smoking in this case.
Now it might seem difficult to construct a parallel for Newcomb here, and this is getting at what appears different to you: if someone says, “I am going to use a decision theory which rigidly recommends two-boxing,” that will suggest e.g. that even his previous statements promoting one-boxing were not as strong as they might have been, and therefore he should increase his expectation of not getting the million. In other words, we have the “right” kind of correlation almost by definition, because “the way in which X causes C or D explicitly involves my choosing A or B.”
But the same thing can actually happen in the smoking case. If we say, “why are you using a decision theory which rigidly recommends smoking?” the answer might well be the (somewhat uncertain) strength of your desire to smoke. And to the degree that it is, whether you use this decision theory or some other will affect your actual expectation of having the lesion. And in this case, you should choose to use a decision theory which recommends not smoking. If the lesion is allowed to affect how I make my choice—which is absolutely necessary in the 100% case, and which is possible even in lower correlation cases—then the parallel between the Smoking Lesion and Newcomb is restored.
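A small numeric check of the screening-off point in the desire-mediated version above (all probabilities invented for illustration):

```python
# If the lesion influences smoking only by creating a desire, and the desire is
# observed, then the act itself carries no further news about the lesion.

P_LESION = 0.2
P_DESIRE_GIVEN_LESION = 0.9
P_DESIRE_GIVEN_NO_LESION = 0.2
P_SMOKE_GIVEN_DESIRE = 0.8     # smoking depends only on the desire,
P_SMOKE_GIVEN_NO_DESIRE = 0.1  # not on the lesion directly

def p_lesion_given(desire: bool, smoke: bool) -> float:
    """Posterior on the lesion after observing both the desire and the act."""
    def joint(lesion: bool) -> float:
        p = P_LESION if lesion else 1 - P_LESION
        pd = P_DESIRE_GIVEN_LESION if lesion else P_DESIRE_GIVEN_NO_LESION
        p *= pd if desire else 1 - pd
        ps = P_SMOKE_GIVEN_DESIRE if desire else P_SMOKE_GIVEN_NO_DESIRE
        p *= ps if smoke else 1 - ps
        return p
    return joint(True) / (joint(True) + joint(False))

# Holding the desire fixed, choosing to smoke doesn't move the credence at all:
print(p_lesion_given(desire=True, smoke=True))    # ~0.529
print(p_lesion_given(desire=True, smoke=False))   # ~0.529
```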
How confident are you that you understand TDT better than Eliezer does? Because he seems to think that TDT implies smoking.
people constantly said things like, “if the lesion has a 100% correlation, then you can’t make a choice anyway,” and things like that, which is an intuition about the causality. But obviously that is not true except in the sense that if Omega has a 100% correlation and makes a decision, you can no longer make a choice in Newcomb either.
This looks to me like another thing that actually depends on the details of what the alleged cause is and how it does the causing, and I don’t think it’s at all clear that “if the lesion is 100% correlated with smoking, you can’t be making a real choice” and “if Omega’s choice of what to put in the boxes is 100% correlated with your choice of which box(es) to pick, you can’t be making a real choice” are equivalent.
For instance, suppose Omega chooses what to put in the boxes by looking into the future using magic. In this case, your choice is as real and free as any choice ever is. On the other hand, suppose the way the smoking lesion works is that everyone in the world has a little override built into their brain by a mad genius neuroscientist that detects situations in which they might choose whether to start smoking and forces them to do one thing rather than another, in a way that has absolutely nothing in common with any other decisions they make. In this case, your choice is as unreal and un-free as anything choice-like could ever be.
I dunno, maybe you don’t care about such intuitions, and you insist on a definition of terms like “choice” and “freedom” that’s perfectly clear and rigorous but doesn’t refer to any of these details that might differ in the two scenarios. I personally would be absolutely staggered if any such definition actually came close to matching how those words are actually used in practice. I don’t know how to give an adequate definition of any of those terms, though I have vague handwavy guesses at the sort of shape a definition might have once we know a lot more about brains and things, so in default of such a definition I’m going with my intuitions, and those say that the level of abstraction at which Newcomb and Smoking Lesion look the same is too high a level of abstraction to enable us to answer questions like “is this person really making a choice?”.
And this is really no different from the lesion, since if we are to imagine the lesion case working in real life, we need to include the relationship between the physical lesion and the rest of the brain.
My understanding of what I was supposed to assume about the Smoking Lesion was that it’s something rather crude that doesn’t depend on fine details of how the brain does whatever it does when making choices. But I think the how-the-correlation-works question is more central, so let’s move on.
There will be parallel cases for Smoking Lesion and for Newcomb
Sure. If you augment the SL and Newcomb scenarios with extra details about what’s going on, those extra details can matter. So, e.g., if Omega just predicts that you’ll take two boxes iff you’ve usually said “I would take two boxes” then you should probably say “I would take one box” and then take two boxes. But this version of Newcomb is utterly incompatible with how the Newcomb problem is always presented: in a world where Omega was known to operate this way, Omega’s success rate would be approximately 0% rather than the near-100% that makes the problem actually interesting. (Because everyone would say “I would take one box” and then everyone would take two boxes.)
So your version of Newcomb designed to yield a two-boxing decision succeeds in yielding a two-boxing decision, but only by not actually being a version of Newcomb in anything but name.
Your examples in which (if I’m understanding them correctly) someone, after being presented with the SL/Newcomb choice, selects a decision theory and then applies it, seem very strange. I mean, I admit that what gets people thinking about decision theories is often knotty questions like Newcomb—but I’ve never heard of a case where someone got into decision theory in order to resolve such a knotty question with which they personally were faced and then did actually select and apply some concrete decision theory to resolve their question. (Though I wouldn’t be surprised to hear it’s happened at least once.) In that case, I agree that the choice of decision theory is all tangled up with the other factors involved (lesion, desires, preference for getting a million dollars, …); but I don’t see what this has to do with the rest of us who are contemplating decision theories in the abstract, nor with a hypothetical person in the SL/NP scenarios who, like most such hypothetical people :-), doesn’t react to their challenge by trying to select a decision theory to live by thenceforward.
I’m going to answer this with several comments, and probably not all today. In this one I am going to make some general points which are not necessarily directly addressed to particular comments you made, but which might show more clearly why I interpret the Smoking Lesion problem the way that I do, and in what sense I was discussing how the correlation comes about.
Eliezer used the Smoking Lesion as a counterexample to evidential decision theory. It is supposed to be a counterexample by providing a case where evidential decision theory recommends a bad course of action, namely not smoking when it would be better to smoke. He needed it as a counterexample because if there are no counterexamples, there is no need to come up with an alternative decision theory.
But the stipulation that evidential decision theory recommends not smoking requires us to interpret the situation in a very subtle way where it does not sound much like something that could happen in real life, rather than the crude way where we could easily understand it happening in real life.
Here is why. In order for EDT to recommend not smoking, your actual credence that you have the lesion has to go up after you choose to smoke, and to go down after you choose not to smoke. That is, your honest evaluation of how likely you are to have the lesion has to change precisely because you made that choice.
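To make that concrete, here is a quick check with invented utilities and credences; all it shows is that EDT’s recommendation flips exactly when the credence in the lesion tracks the choice.

```python
U_SMOKE, U_CANCER = 10, -1_000   # pleasure of smoking vs. cost of cancer (invented)

def edt_value(smoke: bool, p_lesion_if_smoke: float, p_lesion_if_not: float) -> float:
    p_lesion = p_lesion_if_smoke if smoke else p_lesion_if_not
    return (U_SMOKE if smoke else 0) + p_lesion * U_CANCER

# Credence unmoved by the choice: EDT says smoke.
print(edt_value(True, 0.2, 0.2) > edt_value(False, 0.2, 0.2))   # True

# Credence genuinely tracks the choice: EDT says don't smoke.
print(edt_value(True, 0.9, 0.1) > edt_value(False, 0.9, 0.1))   # False
```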
Now suppose a case like the Smoking Lesion were to come up in real life. Someone like Eliezer could say, “Look, I’ve been explaining for years that you should choose smoking in these cases. So my credence that I have the lesion won’t change one iota after I choose to smoke. I know perfectly well that my choice has nothing to do with whether I have the lesion; it is because I am living according to my principles.” But if this is true, then EDT does not recommend not smoking anyway in his case. It only recommends not smoking if he will actually believe himself more likely to have the lesion, once he has chosen to smoke, than he did before he made that choice. And that means that he has not found any counterexample to EDT yet.
The need to find a counterexample absolutely excludes any kind of crude causality. If the lesion is supposed to override your normal process of choice, so that for example you start smoking without any real decision, then deciding to smoke will not increase a person’s credence that he has the lesion. In fact it might decrease it, when he sees that he made a decision in a normal way.
In a similar way, there might be a statistical association between choosing to smoke and the lesion, but it still will not increase a person’s credence that he has the lesion, if the association goes away after controlling for some other factor besides the choice, like desire for smoking. In order to have the counterexample, it has to be the case that as far as the person can tell, the correlation is directly between the lesion and the actual choice to smoke. This does not imply that any magic is happening—it refers to the state of the person’s knowledge. But he cannot have the ability to explain away the association so that his choice is clearly irrelevant; because if he does, EDT no longer recommends not smoking.
This is what I think is parallel to the fact that in Newcomb X’s causality is defined directly in relation to the choice of A or B, and makes the situations equivalent. In other words, I agree that in such an unusual situation EDT will recommend not smoking, but I disagree that there is anything wrong with that recommendation.
When Eliezer was originally discussing Newcomb, he posited a 100% correlation or virtually 100%, to make the situation more convincing. So if the Smoking Lesion is supposed to be a fair counterexample to EDT, we should do the same thing. So the best way to interpret the whole situation is like this:
The lesion has in the past had a 100% correlation with the actual choice to smoke, no matter how the particular person concluded that he should make that choice.
In every case, the person makes the choice in a manner which is psychologically normal. This is to ensure that it is not possible to remove the subjective correlation between actually choosing and the lesion; consequently, this stipulation prevents a person from avoiding the update to his credence based on his choice.
It cannot be said that these stipulations make the whole situation impossible, as long as we admit that a person’s choices, and also his mode of choice, are caused by the physical structure of the brain in any case. And even though they make the situation unlikely, this is no more the case than the equivalent stipulations in a Newcomb situation.
Nor can the response be that “you don’t have a real choice” in this situation. Even if we found out that this was true in some sense of choice, it would make no difference to the real experience of a person in this situation, which would be a normal experience of choice, and would be done in a normal manner and for normal reasons. On the contrary: you cannot get out of making a choice any more than a determinist in real life has a realistic possibility of saying, “Now that I realize all my actions are determined, I don’t have to make choices anymore.”
EDT will indeed recommend not smoking in this situation, since clearly if you choose to smoke, you will conclude with high probability that you have the lesion, and if you choose not to smoke, you will conclude with high probability that you do not.
In order for Eliezer to have the counterexample, he needs to recommend smoking even in this situation. Presumably that would go something like this: “Look. I realize that after you follow my recommendation you will rightly conclude that you have the lesion. But we should ignore that, and consider it irrelevant, because we know that you can choose to smoke or not, while you cannot choose to have the lesion or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So choose to smoke, since you prefer that in theory to not smoking. It’s just too bad that you will have to conclude that you have the lesion.”
In my opinion this would be just as wrong as the following:
“Look. I realize that after you follow my recommendation you will rightly conclude that the million is not in the box. But we should ignore that, and consider it irrelevant, because we know that you can choose to take one or two boxes, while you cannot choose to make the million be there or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So take both boxes, since you would prefer the contents of both boxes to the contents of only one. It’s just too bad that you will have to conclude that the million isn’t there.”
Eliezer criticizes the “it’s just too bad” line of thinking by responding that you should stop trying to pretend it isn’t your fault, when you could have just taken one box. I say the same in the lesion case with the above stipulations: don’t pretend it isn’t your fault, when you could just decide not to smoke.
Now suppose a case like the Smoking Lesion were to come up in real life. Someone like Eliezer could say [...]
In other words, for some highly atypical people who have given a lot of explicit thought to situations like the Smoking Lesion one (and who, furthermore, strongly reject EDT), deciding to smoke wouldn’t be evidence of having the lesion and therefore the SL situation for them doesn’t work as a counterexample to EDT. I think I agree, but I don’t see why it matters.
there might be a statistical association between choosing to smoke and having the lesion, but it still will not increase a person’s credence that he has the lesion, if the association goes away after controlling for some factor besides the choice, such as desire for smoking.
Yes, I agree. Just to be clear, it seems like you’re arguing here for “EDT doesn’t necessarily say not to smoke” but elsewhere for “TDT probably says not to smoke”. Is that right? I find the first of these distinctly more plausible than the second, for what it’s worth.
So if the Smoking Lesion is supposed to be a fair counterexample to EDT, we should do the same thing [sc. posit a very-near-100% correlation].
I’m not sure I follow the logic. Even a well-sub-100% Smoking Lesion situation is (allegedly) a counterexample to EDT, and it’s not necessary to push the correlation up to almost 100% for it to serve this purpose; the reason why you need a correlation near to 100% for Newcomb is that what makes it plausible (to the chooser in the Newcomb situation) that Omega really can predict his choices is exactly the fact that the correlation is so strong. If it were much weaker, the chooser would be entirely within his rights to say “My prior against Omega having any substantial predictive ability is extremely strong; no one has shown me the sort of evidence that would change my mind about that; so I don’t think my choosing to two-box is strong evidence that Omega will leave the second box empty; so I shall take both boxes.”
It’s not clear to me that anything parallel is true about the Smoking Lesion scenario, so I don’t see why we “should” push the correlations to practically-100% in that case.
(But I don’t think what you’re saying particularly depends on the correlation being practically 100%.)
I’m not sure what TDT, or Eliezer, would say about your refined smoking-lesion situation. I will think a bit more about what I would say about it :-).
“For some highly atypical people...” The problem is that anyone who discusses this situation is a highly atypical person. And such people cannot imagine actually having a higher credence that they have the lesion, if they choose to smoke. This is why people advocate the smoking answer; and according to what I said in my other comment, it is not a “real Smoking Lesion problem” as long as they think that way, or at least they are not thinking of it as one (it could be that they are mistaken, and that they should have a higher credence, but don’t.)
Just to be clear, it seems like you’re arguing here for “EDT doesn’t necessarily say not to smoke” but elsewhere for “TDT probably says not to smoke”. Is that right? I find the first of these distinctly more plausible than the second, for what it’s worth.
What I meant was: in the situations people usually think about, or at least the way they are thinking about them, EDT doesn’t necessarily say not to smoke. But these are not the situations that are equivalent to the real Newcomb problem—these are equivalent to the fake Newcomb situations. EDT does say not to smoke in the situations which are actually equivalent to Newcomb. When I said “TDT probably says not to smoke,” I was referring to the actually equivalent situations. (Although as I said, I am less confident about TDT now; it may simply be incoherent or arbitrary.)
You don’t need to have a 100% correlation either for Newcomb or for the Smoking Lesion. But you are right that the reason for a near 100% correlation for Newcomb is to make the situation convincing to the chooser. But this is just to get him to admit that the million will actually be more likely to be there if he takes only one box. In the same way, theoretically you do not need it for the Smoking Lesion. But again, you have to convince the chooser that he personally will have a higher chance of having the lesion if he chooses to smoke, and it is hard to convince people of that. As someone remarked about people’s attitude on one of the threads about this, “So the correlation goes down from 100% to 99.9% and suddenly you consider yourself one of the 0.1%?” If anything, it seems harder to convince people they are in the true Smoking Lesion situation than in the true Newcomb situation. People find Newcomb pretty plausible even if the correlation is 90%, if it is both for one-boxers and two-boxers, but a 90% correlation in the lesion case would leave a lot of people’s opinions about whether they have the lesion unchanged, no matter whether they choose to smoke or not.
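For reference, the arithmetic behind “pretty plausible even if the correlation is 90%”, assuming the standard Newcomb payoffs of $1,000,000 and $1,000:

```python
MILLION, THOUSAND, ACCURACY = 1_000_000, 1_000, 0.9   # 90% accuracy for both kinds of chooser

one_box_ev = ACCURACY * MILLION                    # the million is there iff you were predicted correctly
two_box_ev = (1 - ACCURACY) * MILLION + THOUSAND   # the million is there only if the predictor slipped

print(one_box_ev, two_box_ev)   # 900000.0 vs. 101000.0
```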
(But I don’t think what you’re saying particularly depends on the correlation being practically 100%.)
Sure. If you augment the SL and Newcomb scenarios with extra details about what’s going on, those extra details can matter. So, e.g., if Omega just predicts that you’ll take two boxes iff you’ve usually said “I would take two boxes” then you should probably say “I would take one box” and then take two boxes. But this version of Newcomb is utterly incompatible with how the Newcomb problem is always presented: in a world where Omega was known to operate this way, Omega’s success rate would be approximately 0% rather than the near-100% that makes the problem actually interesting. (Because everyone would say “I would take one box” and then everyone would take two boxes.)
So your version of Newcomb designed to yield a two-boxing decision succeeds in yielding a two-boxing decision, but only by not actually being a version of Newcomb in anything but name.
I was not assuming a world where Omega was known to operate this way. I originally said that it matters how the choice got correlated with one-boxing, and this was an example. In order for it to work, as you are pointing out, it has to be working without this mode of acting being known. In other words, suppose someone very wealthy comes forward and says that he is going to test the Newcomb problem in real life, and says that he will act as Omega. We don’t know what his method is, but it turns out that he has a statistically high rate of success. Now suppose you end up with insider knowledge that he is just judging based on a person’s past internet comments. It does not seem impossible that this could give a positive rate of success in the real world as long as it is unknown; presumably people who say they would one-box, would be more likely to actually one-box. (Example: Golden Balls was a game show involving the Prisoner’s dilemma. Before cooperating or defecting, the contestants were allowed to talk to each other for a certain period. People analyzing it afterwards determined that a person explicitly and directly saying “I will cooperate” had a 30% higher chance of actually cooperating; people who weren’t going to cooperate generally were vaguer about their intentions.) But once you end up with the insider knowledge, it makes sense to go around saying you will take only one box, and then take both anyway.
This happens because the correlation between your choice and the million is removed once you control for the past comments. The point of those examples was a correlation that you cannot separate between your choice and the million. For the Smoking Lesion to be equivalent, it has to be equally impossible to remove the correlation between your choice and the lesion, as I said in the long comment.
It does not seem impossible that this could give a positive rate of success in the real world
I don’t know about you, but for me to give serious consideration to one-boxing in a Newcomb situation the box-stuffer would need to have demonstrated something better than “a positive rate of success”. I agree that if I had insider knowledge that they were doing it by looking at people’s past internet comments then two-boxing would be rational, but I don’t think any advocates of one-boxing would disagree with that. The situation you’re describing just isn’t an actual Newcomb problem any more.
It seems very likely to be possible for a human to achieve, say, 75% success on both one-boxers and two-boxers, maybe not with such a simple rule, but certainly without an actual mind-reading ability. If this is the case, then there must be plenty of one-boxers who would one-box against someone who was getting a 75% success rate, even if you aren’t one of them.
I don’t think any advocates of one-boxing would disagree with that. The situation you’re describing just isn’t an actual Newcomb problem any more.
I agree. That was the whole point. I was not trying to say that one-boxers would disagree, but that they would agree. The point is that to have an “actual Newcomb problem” your personal belief about whether you will get the million has to actually vary with your actual choice to take one or two boxes in the particular case; if your belief isn’t going to vary, even in the individual case, you will just take both boxes according to the argument, “I’ll get whatever I would have with one box, plus the thousand.”
I was simply saying that since Eliezer constructs the Smoking Lesion as a counterexample to EDT, we need to treat the “actual Smoking Lesion” in the same way: it is only the “actual Smoking Lesion problem” if your belief that you have the lesion is actually going to vary, depending on whether you choose to smoke or not.
How confident are you that you understand TDT better than Eliezer does? Because he seems to think that TDT implies smoking.
I don’t think I understand TDT better than Eliezer. I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible. I do know that Eliezer is in favor both of one-boxing and of cooperating in the Prisoner’s Dilemma, and both of those require the kind of reasoning that leads to not smoking. That is why I said that I “suspect” that TDT means not smoking.
I don’t think I understand TDT better than Eliezer. I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible.
Since Eliezer is on record as saying that TDT advocates non-corresponding answers to Newcomb and the Smoking Lesion, it seems to me that you should at the very least be extremely uncertain about at least one of (1) whether TDT is actually sensible, (2) whether Eliezer actually understands his own theory, and (3) whether you are correct about sensible theories giving corresponding answers in those cases.
Because if sensible ⇒ corresponding answers and TDT is sensible, then it gives corresponding answers; and if Eliezer understands his own theory then it doesn’t give corresponding answers.
I looked back at some of Eliezer’s early posts on this and they certainly didn’t claim to be fully worked out; he said things like “this part is still magic,” and so on. However, I have significantly increased my estimate of the possibility that TDT might be incoherent, or at any rate arbitrary; he did seem to want to say that you would consider yourself the cause of the million being in the box, and I don’t think it is true in any non-arbitrary way that you should consider yourself the cause of the million, and not of whether you have the lesion. As an example (which is certainly very different from Eliezer saying it), bogus seemed to assert that it was just the presentation of the problem, namely whether you count yourself as being able to affect something or not.
I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible.
I think you don’t quite understand either how TDT is supposed to work, or how the way it works can be “sensible”. If you exogenously alter every “smoke” decision to “don’t smoke” in Smoking Lesion, your payoff doesn’t improve, by construction. If you exogenously alter every “two-box” decision to “one box”, this does change your payoff. Note the ‘exogenously’ qualification above, which is quite important—and note that the “exogenous” change must alter all logically-connected choices in the same way: in Newcomb, the very same exogenous input acts on Omega’s prediction as on your actual choice; and in Smoking Lesion, the change to “smoke” or “don’t smoke” occurs regardless of whether you have the Smoking Lesion or not.
(It might be that you could express the problems in EDT in a way that leads to the correct choice, by adding hardwired models of these “exogenous but logically-connected” decisions. But this isn’t something that most EDT advocates would describe as a necessary part of that theory—and this is all the more true if a similar change could work for CDT!)
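Here is one way to render the “exogenous but logically-connected” intervention as a toy calculation. This is a rough sketch of the comment above, not a faithful implementation of TDT; the payoff numbers are invented.

```python
MILLION, THOUSAND = 1_000_000, 1_000
U_SMOKE, U_CANCER = 10, -1_000

def newcomb_payoff(decision: str) -> int:
    # In Newcomb, Omega's prediction is logically tied to the very decision being
    # forced, so the exogenous change moves the prediction along with the choice.
    prediction = decision
    box = MILLION if prediction == "one-box" else 0
    return box if decision == "one-box" else box + THOUSAND

def lesion_payoff(decision: str, has_lesion: bool) -> int:
    # In the Smoking Lesion, the lesion is a fixed background fact; forcing the
    # decision leaves it, and the cancer, untouched.
    return (U_SMOKE if decision == "smoke" else 0) + (U_CANCER if has_lesion else 0)

print(newcomb_payoff("one-box") - newcomb_payoff("two-box"))   # 999000: the forced switch helps
for lesion in (True, False):
    print(lesion_payoff("don't smoke", lesion) - lesion_payoff("smoke", lesion))   # -10 either way
```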
Omega’s prediction in reality is based on the physical state of your brain. So if altering your choice in Newcomb alters Omega’s prediction, it also alters the state of your brain. And if that is the case, it can alter the state of your brain when you choose not to smoke in the Smoking Lesion.
The ‘state of your brain’ in Newcomb and Smoking Lesion need not be directly comparable. If you could alter the state of your brain in a way that makes you better off in Smoking Lesion just by exogenously forcing the “don’t smoke” choice, then the problem statement wouldn’t be allowed to include the proviso that choosing “don’t smoke” doesn’t improve your payoff.
The problem statement does not include the proviso that choosing not to smoke does not improve the payoff. It just says that if you have the lesion, you get cancer, and if you don’t, you don’t. And it says that people who choose to smoke, turn out to have the lesion, and people who choose not to smoke, turn out not to have the lesion. No proviso about not smoking not improving the payoff.
You might be right. But then TDT chooses not to smoke precisely when CDT does, because there is nothing that’s logically-but-not-physically/causally connected with the exogenous decision whether or not to smoke. Which arguably makes this version of the problem quite uninteresting.
I’ve thought of a new way to think about the general case: call it the Alien Implant case. After people are dead (and only after they are dead) an autopsy of the brain reveals that there is a black box in their brains, thought to be implanted by aliens. There is a dial on it, set to A or B. All the people who have the dial set to A, during their lives chose to smoke, and got cancer. All the people who have the dial set to B, during their lives chose not to smoke, and did not get cancer.
It turns out that the box with the dial set to A causes cancer through a simple physical mechanism. Having the dial set to B does not have this effect.
I prefer smoking to not smoking, in general, but smoking and getting cancer would be worse than not smoking. What should I do?
“Not enough info” is now not a valid response, since I have to decide. I could try to estimate the probability that the aliens are predicting my choice, and the probability that they are causing it, and then use some complicated fake utility calculation (fake since it would not match what we would actually expect to be the outcome).
But it seems evident that this is what Eliezer called a “ritual of cognition”; someone who cares only about the outcome will just not smoke.
The blackmail letter has someone reading the AI agent’s source code to figure out what it would do, and therefore runs into the objection “you are asserting that the blackmailer can solve the Halting Problem”.
Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn’t a problem—but this creates an interesting question as to how the AI is reasoning about the human’s behavior in a way that doesn’t lead to an infinite loop. One sort of answer we can give is that they’re doing logical reasoning about each other, rather than trying to run each other’s code. This could run into incompleteness problems, but not always.
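As a toy illustration of the loop worry, and of why reasoning from a description of the other agent can avoid it; the agents and policy labels below are made up for the example, not anything from the thread.

```python
import sys
sys.setrecursionlimit(200)

def blackmailer_simulates(victim):
    # Predict the victim by literally running it -- but the victim is trying to
    # run the blackmailer at the same time, so neither call ever returns.
    return "send letter" if victim(blackmailer_simulates) == "pay" else "stay quiet"

def victim_simulates(blackmailer):
    return "pay" if blackmailer(victim_simulates) == "send letter" else "refuse"

try:
    blackmailer_simulates(victim_simulates)
except RecursionError:
    print("mutual simulation never bottoms out")

def blackmailer_reasons(victim_policy: str) -> str:
    # Reasoning from a static description of the victim's behavior (a policy
    # label standing in for a proof about its code) terminates immediately.
    return "send letter" if victim_policy == "pays when blackmailed" else "stay quiet"

print(blackmailer_reasons("refuses when blackmailed"))   # stay quiet
```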
I don’t see any good reason to think that that question does have an answer that doesn’t depend on how those various kinds of causation operate. It’s also not clear to me exactly what X is supposed to be in the Newcomb case. The entire state of my brain and environment prior to my being presented with the two boxes, or something?
I think the question presents the information which Eliezer in fact used to conclude that he should take only one box in Newcomb, and if that is true, he should also have concluded that he should not smoke. It is true that if someone responds, “not enough info,” then he can say that Newcomb and the Smoking lesion differ. But in that case, what additional information are you asking for? You haven’t suggested any way to specify the additional info which supposedly would tell you whether to choose A or B. I would just say that I know without any additional info that I should choose B, because that I way I will get B and D, which I prefer to A and C. What extra factor is needed to get you to choose A, and what to choose B?
Yes, in the Newcomb case X will be the pre-existing state of your brain and environment, since this is the cause both of Omega’s decision whether to place the million in the box and of your choice.
I am not proposing a specific algorithm, so I don’t think I can give a very specific answer to that question. But I can say that my intuitions about whether to prefer A+C or B+D depend on what sort of thing X is and how it causes me to make the choices I do; things that seem particularly relevant here are that in the Newcomb case (1) X, as you’ve defined it, includes (or at least implies) all the details of my decision-making process and (2) the way in which X causes C or D explicitly involves my choosing A or B, whereas in the smoking lesion case neither of those is anything like true.
I think #2 is why Eliezer’s own pet decision theory gives different answers in the Newcomb and Smoking cases. If the two are, really, logically equivalent then you should be able to demonstrate an outright inconsistency in Eliezer’s “TDT”. Can you do that?
I have not seen a valid derivation of smoking from Eliezer’s TDT, so I am not saying that TDT is inconsistent. I suspect that TDT actually implies not smoking. The point of the generalization is that any decision theory that answers the question will say both [one-box, don’t smoke] or [two-box, smoke]. Causal decision theory does answer the question: it says that you aren’t responsible for C or D, and you prefer A to B, so do A. And non-causal decision theories in general will say do B, because you prefer B & D to A & C; I think this is probably true of TDT as well.
I agree with you that the reason some people want different answers is because of ideas about the causality there. When we previously had these discussions, people constantly said things like, “if the lesion has a 100% correlation, then you can’t make a choice anyway,” and things like that, which is an intuition about the causality. But obviously that is not true except in the sense that if Omega has a 100% correlation and makes a decision, you can longer make a choice in Newcomb either. In fact, I think that any time you add something relevant to the decision to the general case I presented, you can construct something parallel for the Smoking Lesion and for Newcomb.
What is included in the Newcomb case might depend on the particular case: if someone is absolutely determined to one-box no matter what the circumstances, then the state of his brain alone might be X. And this is really no different from the lesion, since if we are to imagine the lesion case working in real life, we need to include the relationship between the physical lesion and the rest of the brain. So the state of the brain might be sufficient for both, at least in some cases.
“The way in which X causes C or D explicitly involves my choosing A or B, whereas in the smoking lesion case neither of those is anything like true.” It does matter how the lesion gets correlated with smoking, just as it matters how Newcomb’s prediction gets correlated with one or two-boxing. This is why I prefer to discuss the case of 100% correlation first: because in this case, they have to be correlated in the right way in both cases.
Suppose there is some correlation but it is not 100%. There will be parallel cases for Smoking Lesion and for Newcomb where the correlation is not the right kind:
Smoking Lesion. Suppose the lesion is correlated with smoking only by causing you to have a desire for smoking. Then someone can say, “I have a strong desire to smoke. That means I probably have the lesion. But if I smoke, it doesn’t mean my desire is any stronger, since I already have that desire; so I might as well smoke.” Note that even evidential decision theory recommends smoking here, at least given that you can directly take note of the condition of your desire; if you can validly argue, “if I actually smoke, that suggests my desire was a bit stronger than I realized, so I will also be more likely to have the lesion,” that may change the situation (I’ll discuss this below).
Necomb. Suppose Omega’s prediction is correlated with one-boxing only by taking note of previous statements a person has made and determining whether most of them support one-boxing or two-boxing. Then someone can say, “Most of my statements in the past have supported one-boxing. So the million is probably in the box. So I might as well take both boxes. I will probably still get the million, since this will not affect the past statements that Omega is judging from.” Even evidential decision theory will recommend this course of action, and I think even Eliezer would agree that if we know for a fact that Omega is judging in this way, and we directly know the condition of our past statements, two-boxing is appropriate. But again, it is different if one can validly argue, “if I take two boxes now, that will likely mean my promotion of one-boxing in the past wasn’t quite as strong as I thought, so the million will be less likely to be there.” This kind of uncertainty may again change the situation in the same way as uncertainty about my desire above.
Suppose the correlation is not 100%, but we have one of the conditional situations mentioned above: where if I do A, I actually do increase my expectation of C, and if I do B, I actually do increase my expectation of D. This is the right kind of correlation. And in this case, evidential decision theory recommends doing B in both cases, and I think the reasons are parallel for Newcomb and for Smoking lesion. [Edit: obviously if the correlation is not 100% it will depend on the particular correlation and on concrete utilities; I ignored this for simplicity.]
But let’s consider another case where the correlation isn’t the right kind. Again, the lesion causes smoking by causing desire. And I am uncertain of exactly how strong my desire is, but I know I have some desire. Then it would appear at first that evidential decision theory recommends not smoking. But the situation will be changed if I can validly argue, “I am going to decide using some rigid decision theory that always recommends the same course of action in this situation. And this decision theory recommends smoking. This will imply in no way that my desire was any stronger, since it wasn’t the strength of the desire that led to it, but this rigid decision theory.” In that case, choosing to smoke will not increase your expectation that you have the lesion. And therefore even evidential decision theory will recommend smoking in this case.
Now it might seem difficult to construct a parallel for Newcomb here, and this is getting at what appears different to you: if someone says, “I am going to use a decision theory which rigidly recommends two-boxing,” that will suggest e.g. that even his previous statements promoting one-boxing were not as strong as they might have been, and therefore he should increase his expectation of not getting the million. In other words, we have the “right” kind of correlation almost by definition, because “the way in which X causes C or D explicitly involves my choosing A or B.”
But the same thing can actually happen in the smoking case. If we say, “why are you using a decision theory which rigidly recommends smoking?” the answer might well be the (somewhat uncertain) strength of your desire to smoke. And to the degree that it is, whether you use this decision theory or some other will affect your actual expectation of having the lesion. And in this case, you should choose to use a decision theory which recommends not smoking. If the lesion is allowed to affect how I make my choice—which is absolutely necessary in the 100% case, and which is possible even in lower correlation cases—then the parallel between the Smoking Lesion and Newcomb is restored.
How confident are you that you understand TDT better than Eliezer does? Because he seems to think that TDT implies smoking.
This looks to me like another thing that actually depends on the details of what the alleged cause is and how it does the causing, and I don’t think it’s at all clear that “if the lesion is 100% correlated with smoking, you can’t be making a real choice” and “if Omega’s choice of what to put in the boxes is 100% correlated with your choice of which box(es) to pick, you can’t be making a real choice” are equivalent.
For instance, suppose Omega chooses what to put in the boxes by looking into the future using magic. In this case, your choice is as real and free as any choice ever is. On the other hand, suppose the way the smoking lesion works is that everyone in the world has a little override built into their brain by a mad genius neuroscientist that detects situations in which they might choose whether to start smoking and forces them to do one thing rather than another, in a way that has absolutely nothing in common with any other decisions they make. In this case, your choice is as unreal and un-free as anything choice-like could ever be.
I dunno, maybe you don’t care about such intuitions, and you insist on a definition of terms like “choice” and “freedom” that’s perfectly clear and rigorous but doesn’t refer to any of these details that might differ in the two scenarios. I personally would be absolutely staggered if any such definition actually came close to matching how those words are actually used in practice. I don’t know how to give an adequate definition of any of those terms, though I have vague handwavy guesses at the sort of shape a definition might have once we know a lot more about brains and things, so in default of such a definition I’m going with my intuitions, and those say that the level of abstraction at which Newcomb and Smoking Lesion look the same is too high a level of abstraction to enable us to answer questions like “is this person really making a choice?”.
My understanding of what I was supposed to assume about the Smoking Lesion was that it’s something rather crude that doesn’t depend on fine details of how the brain does whatever it does when making choices. But I think the how-the-correlation-works question is more central, so let’s move on.
Sure. If you augment the SL and Newcomb scenarios with extra details about what’s going on, those extra details can matter. So, e.g., if Omega just predicts that you’ll take two boxes iff you’ve usually said “I would take two boxes” then you should probably say “I would take one box” and then take two boxes. But this version of Newcomb is utterly incompatible with how the Newcomb problem is always presented: in a world where Omega was known to operate this way, Omega’s success rate would be approximately 0% rather than the near-100% that makes the problem actually interesting. (Because everyone would say “I would take one box” and then everyone would take two boxes.)
So your version of Newcomb designed to yield a two-boxing decision succeeds in yielding a two-boxing decision, but only by not actually being a version of Newcomb in anything but name.
Your examples in which (if I’m understanding them correctly) someone, after being presented with the SL/Newcomb choice, selects a decision theory and then applies it, seem very strange. I mean, I admit that what gets people thinking about decision theories is often knotty questions like Newcomb—but I’ve never heard of a case where someone got into decision theory in order to resolve such a knotty question with which they personally were faced, and then actually selected and applied some concrete decision theory to resolve it. (Though I wouldn’t be surprised to hear it’s happened at least once.) In that case, I agree that the choice of decision theory is all tangled up with the other factors involved (lesion, desires, preference for getting a million dollars, …); but I don’t see what this has to do with the rest of us who are contemplating decision theories in the abstract, nor with a hypothetical person in the SL/NP scenarios who, like most such hypothetical people :-), doesn’t react to their challenge by trying to select a decision theory to live by thenceforward.
I’m going to answer this with several comments, and probably not all today. In this one I am going to make some general points which are not necessarily directly addressed to particular comments you made, but which might show more clearly why I interpret the Smoking Lesion problem the way that I do, and in what sense I was discussing how the correlation comes about.
Eliezer used the Smoking Lesion as a counterexample to evidential decision theory. It is supposed to be a counterexample by providing a case where evidential decision theory recommends a bad course of action, namely not smoking when it would be better to smoke. He needed it as a counterexample because if there are no counterexamples, there is no need to come up with an alternative decision theory.
But the stipulation that evidential decision theory recommends not smoking requires us to interpret the situation in a very subtle way, where it does not sound much like something that could happen in real life, rather than in the crude way where we could easily imagine it happening in real life.
Here is why. In order for EDT to recommend not smoking, your actual credence that you have the lesion has to go up after you choose to smoke, and to go down after you choose not to smoke. That is, your honest evaluation of how likely you are to have the lesion has to change precisely because you made that choice.
Now suppose a case like the Smoking Lesion were to come up in real life. Someone like Eliezer could say, “Look, I’ve been explaining for years that you should choose smoking in these cases. So my credence that I have the lesion won’t change one iota after I choose to smoke. I know perfectly well that my choice has nothing to do with whether I have the lesion; it is because I am living according to my principles.” But if this is true, then EDT does not recommend not smoking anyway in his case. It only recommends not smoking if he will actually believe himself more likely to have the lesion, once he has chosen to smoke, than he did before he made that choice. And that means that he has not found any counterexample to EDT yet.
The need to find a counterexample absolutely excludes any kind of crude causality. If the lesion is supposed to override your normal process of choice, so that for example you start smoking without any real decision, then deciding to smoke will not increase a person’s credence that he has the lesion. In fact it might decrease it, when he sees that he made a decision in a normal way.
In a similar way, there might be a statistical association between choosing to smoke and the lesion, but it still will not increase a person’s credence that he has the lesion, if the association goes away after controlling for some other factor besides the choice, like desire for smoking. In order to have the counterexample, it has to be the case that as far as the person can tell, the correlation is directly between the lesion and the actual choice to smoke. This does not imply that any magic is happening—it refers to the state of the person’s knowledge. But he cannot have the ability to explain away the association so that his choice is clearly irrelevant; because if he does, EDT no longer recommends not smoking.
This is what I think is parallel to the fact that in Newcomb X’s causality is defined directly in relation to the choice of A or B, and makes the situations equivalent. In other words, I agree that in such an unusual situation EDT will recommend not smoking, but I disagree that there is anything wrong with that recommendation.
When Eliezer was originally discussing Newcomb, he posited a 100% correlation or virtually 100%, to make the situation more convincing. So if the Smoking Lesion is supposed to be a fair counterexample to EDT, we should do the same thing. So the best way to interpret the whole situation is like this:
The lesion has in the past had a 100% correlation with the actual choice to smoke, no matter how the particular person concluded that he should make that choice.
In every case, the person makes the choice in a manner which is psychologically normal. This is to ensure that it is not possible to explain away the subjective correlation between actually choosing and the lesion; consequently, this stipulation prevents a person from declining to update his credence based on his choice.
It cannot be said that these stipulations make the whole situation impossible, as long as we admit that a person’s choices, and also his mode of choice, are caused by the physical structure of the brain in any case. And even though they make the situation unlikely, this is no more so than for the equivalent stipulations in a Newcomb situation.
Nor can the response be that “you don’t have a real choice” in this situation. Even if we found out that this was true in some sense of choice, it would make no difference to the real experience of a person in this situation, which would be a normal experience of choice, and would be done in a normal manner and for normal reasons. On the contrary: you cannot get out of making a choice any more than a determinist in real life has a realistic possibility of saying, “Now that I realize all my actions are determined, I don’t have to make choices anymore.”
EDT will indeed recommend not smoking in this situation, since clearly if you choose to smoke, you will conclude with high probability that you have the lesion, and if you choose not to smoke, you will conclude with high probability that you do not.
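A small worked version of that comparison, with payoffs I am making up purely for illustration:

```python
# Made-up payoffs: smoking is worth +10 on its own; cancer is worth -1000.
# In the stipulated situation your credence in the lesion (and hence in
# cancer) tracks your actual choice almost perfectly.

U_SMOKE, U_CANCER = 10.0, -1000.0

def edt_eu(p_lesion_given_choice, smoke):
    return (U_SMOKE if smoke else 0.0) + p_lesion_given_choice * U_CANCER

print(edt_eu(0.99, smoke=True))    # -980.0
print(edt_eu(0.01, smoke=False))   # -10.0
# EDT compares these two conditional expectations and says: don't smoke.
```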
In order for Eliezer to have the counterexample, he needs to recommend smoking even in this situation. Presumably that would go something like this: “Look. I realize that after you follow my recommendation you will rightly conclude that you have the lesion. But we should ignore that, and consider it irrelevant, because we know that you can choose to smoke or not, while you cannot choose to have the lesion or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So choose to smoke, since you prefer that in theory to not smoking. It’s just too bad that you will have to conclude that you have the lesion.”
In my opinion this would be just as wrong as the following:
“Look. I realize that after you follow my recommendation you will rightly conclude that the million is not in the box. But we should ignore that, and consider it irrelevant, because we know that you can choose to take one or two boxes, while you cannot choose to make the million be there or not. So for the purposes of considering what to do, we should pretend that the choice won’t change our credence. So take both boxes, since you would prefer the contents of both boxes to the contents of only one. It’s just too bad that you will have to conclude that the million isn’t there.”
Eliezer criticizes the “it’s just too bad” line of thinking by responding that you should stop trying to pretend it isn’t your fault, when you could have just taken one box. I say the same in the lesion case with the above stipulations: don’t pretend it isn’t your fault, when you could just decide not to smoke.
In other words, for some highly atypical people who have given a lot of explicit thought to situations like the Smoking Lesion one (and who, furthermore, strongly reject EDT), deciding to smoke wouldn’t be evidence of having the lesion and therefore the SL situation for them doesn’t work as a counterexample to EDT. I think I agree, but I don’t see why it matters.
Yes, I agree. Just to be clear, it seems like you’re arguing here for “EDT doesn’t necessarily say not to smoke” but elsewhere for “TDT probably says not to smoke”. Is that right? I find the first of these distinctly more plausible than the second, for what it’s worth.
I’m not sure I follow the logic. Even a well-sub-100% Smoking Lesion situation is (allegedly) a counterexample to EDT, and it’s not necessary to push the correlation up to almost 100% for it to serve this purpose; the reason why you need a correlation near to 100% for Newcomb is that what makes it plausible (to the chooser in the Newcomb situation) that Omega really can predict his choices is exactly the fact that the correlation is so strong. If it were much weaker, the chooser would be entirely within his rights to say “My prior against Omega having any substantial predictive ability is extremely strong; no one has shown me the sort of evidence that would change my mind about that; so I don’t think my choosing to two-box is strong evidence that Omega will leave the second box empty; so I shall take both boxes.”
It’s not clear to me that anything parallel is true about the Smoking Lesion scenario, so I don’t see why we “should” push the correlations to practically-100% in that case.
(But I don’t think what you’re saying particularly depends on the correlation being practically 100%.)
I’m not sure what TDT, or Eliezer, would say about your refined smoking-lesion situation. I will think a bit more about what I would say about it :-).
“For some highly atypical people...” The problem is that anyone who discusses this situation is a highly atypical person. And such people cannot imagine actually having a higher credence that they have the lesion, if they choose to smoke. This is why people advocate the smoking answer; and according to what I said in my other comment, it is not a “real Smoking Lesion problem” as long as they think that way, or at least they are not thinking of it as one (it could be that they are mistaken, and that they should have a higher credence, but don’t).
What I meant was: in the situations people usually think about, or at least the way they are thinking about them, EDT doesn’t necessarily say not to smoke. But these are not the situations that are equivalent to the real Newcomb problem—these are equivalent to the fake Newcomb situations. EDT does say not to smoke in the situations which are actually equivalent to Newcomb. When I said “TDT probably says not to smoke,” I was referring to the actually equivalent situations. (Although as I said, I am less confident about TDT now; it may simply be incoherent or arbitrary.)
You don’t need to have a 100% correlation either for Newcomb or for the Smoking Lesion. But you are right that the reason for a near 100% correlation for Newcomb is to make the situation convincing to the chooser. But this is just to get him to admit that the million will actually be more likely to be there if he takes only one box. In the same way, theoretically you do not need it for the Smoking Lesion. But again, you have to convince the chooser that he personally will have a higher chance of having the lesion if he chooses to smoke, and it is hard to convince people of that. As someone remarked about people’s attitude on one of the threads about this, “So the correlation goes down from 100% to 99.9% and suddenly you consider yourself one of the 0.1%?” If anything, it seems harder to convince people they are in the true Smoking Lesion situation than in the true Newcomb situation. People find Newcomb pretty plausible even if the correlation is 90%, if it holds both for one-boxers and two-boxers, but a 90% correlation in the lesion case would leave a lot of people’s opinions about whether they have the lesion unchanged, no matter whether they choose to smoke or not.
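To put rough numbers on this (my own toy model, reading the “correlation” r as P(smoke | lesion) = P(don’t smoke | no lesion) = r, with some base rate for the lesion):

```python
# How much should choosing to smoke move your credence that you have the
# lesion? That depends on both the correlation strength r and the base
# rate of the lesion; the numbers below are purely illustrative.

def p_lesion_given_smoke(base, r):
    return base * r / (base * r + (1 - base) * (1 - r))

for base in (0.5, 0.1):
    for r in (0.999, 0.99, 0.9):
        print(base, r, round(p_lesion_given_smoke(base, r), 3))
# With a 50% base rate the posterior simply equals r (0.999, 0.99, 0.9).
# With a 10% base rate, r = 0.999 still pushes the credence to about 0.99,
# but r = 0.9 only gets it to 0.5, which leaves much more room for a
# person's opinion about whether he has the lesion to stay roughly
# where it was.
```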
This is correct.
I was not assuming a world where Omega was known to operate this way. I originally said that it matters how the choice got correlated with the million, and this was an example. In order for it to work, as you are pointing out, it has to be working without this mode of acting being known. In other words, suppose someone very wealthy comes forward and says that he is going to test the Newcomb problem in real life, and says that he will act as Omega. We don’t know what his method is, but it turns out that he has a statistically high rate of success. Now suppose you end up with insider knowledge that he is just judging based on a person’s past internet comments. It does not seem impossible that this could give a positive rate of success in the real world as long as it is unknown; presumably people who say they would one-box would be more likely to actually one-box. (Example: Golden Balls was a game show involving the Prisoner’s Dilemma. Before cooperating or defecting, the contestants were allowed to talk to each other for a certain period. People analyzing it afterwards determined that a person explicitly and directly saying “I will cooperate” had a 30% higher chance of actually cooperating; people who weren’t going to cooperate generally were vaguer about their intentions.) But once you end up with the insider knowledge, it makes sense to go around saying you will take only one box, and then take both anyway.
This happens because the correlation between your choice and the million is removed once you control for the past comments. The point of those examples was a correlation that you cannot separate between your choice and the million. For the Smoking Lesion to be equivalent, it has to be equally impossible to remove the correlation between your choice and the lesion, as I said in the long comment.
I don’t know about you, but for me to give serious consideration to one-boxing in a Newcomb situation the box-stuffer would need to have demonstrated something better than “a positive rate of success”. I agree that if I had insider knowledge that they were doing it by looking at people’s past internet comments then two-boxing would be rational, but I don’t think any advocates of one-boxing would disagree with that. The situation you’re describing just isn’t an actual Newcomb problem any more.
It seems quite possible for a human to achieve, say, 75% success on both one-boxers and two-boxers, maybe not with such a simple rule, but certainly without any actual mind-reading ability. If this is the case, then there must be plenty of one-boxers who would one-box against someone who was getting a 75% success rate, even if you aren’t one of them.
I agree. That was the whole point. I was not trying to say that one-boxers would disagree, but that they would agree. The point is that to have an “actual Newcomb problem” your personal belief about whether you will get the million has to actually vary with your actual choice to take one or two boxes in the particular case; if your belief isn’t going to vary, even in the individual case, you will just take both boxes according to the argument, “I’ll get whatever I would have with one box, plus the thousand.”
I was simply saying that since Eliezer constructs the Smoking Lesion as a counterexample to EDT, we need to treat the “actual Smoking Lesion” in the same way: it is only the “actual Smoking Lesion problem” if your belief that you have the lesion is actually going to vary, depending on whether you choose to smoke or not.
I don’t think I understand TDT better than Eliezer. I think that any sensible decision theory will give corresponding answers to Newcomb and the Smoking Lesion, and I am assuming that TDT is sensible. I do know that Eliezer is in favor both of one-boxing and of cooperating in the Prisoner’s Dilemma, and both of those require the kind of reasoning that leads to not smoking. That is why I said that I “suspect” that TDT means not smoking.
Since Eliezer is on record as saying that TDT advocates non-corresponding answers to Newcomb and the Smoking Lesion, it seems to me that you should at the very least be extremely uncertain about at least one of (1) whether TDT is actually sensible, (2) whether Eliezer actually understands his own theory, and (3) whether you are correct about sensible theories giving corresponding answers in those cases.
Because if sensible ⇒ corresponding answers and TDT is sensible, then it gives corresponding answers; and if Eliezer understands his own theory then it doesn’t give corresponding answers.
I looked back at some of Eliezer’s early posts on this and they certainly didn’t claim to be fully worked out; he said things like “this part is still magic,” and so on. However, I have significantly increased my estimate of the possibility that TDT might be incoherent, or at any rate arbitrary; he did seem to want to say that you would consider yourself the cause of the million being in the box, and I don’t think it is true in any non-arbitrary way that you should consider yourself the cause of the million, and not of whether you have the lesion. As an example (which is certainly very different from Eliezer saying it), bogus seemed to assert that it was just the presentation of the problem, namely whether you count yourself as being able to affect something or not.
I think you don’t quite understand either how TDT is supposed to work, or how the way it works can be “sensible”. If you exogenously alter every “smoke” decision to “don’t smoke” in Smoking Lesion, your payoff doesn’t improve, by construction. If you exogenously alter every “two-box” decision to “one box”, this does change your payoff. Note the ‘exogenously’ qualification above, which is quite important—and note that the “exogenous” change must alter all logically-connected choices in the same way: in Newcomb, the very same exogenous input acts on Omega’s prediction as on your actual choice; and in Smoking Lesion, the change to “smoke” or “don’t smoke” occurs regardless of whether you have the Smoking Lesion or not.
(It might be that you could express the problems in EDT in a way that leads to the correct choice, by adding hardwired models of these “exogenous but logically-connected” decisions. But this isn’t something that most EDT advocates would describe as a necessary part of that theory—and this is all the more true if a similar change could work for CDT!)
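If it helps, here is a toy simulation of that contrast (my own construction, with arbitrary probabilities and payoffs): forcing everyone’s choice to “don’t smoke” cannot improve the Smoking Lesion payoff, because cancer depends only on the lesion, while forcing “one-box” does improve the Newcomb payoff, because the same exogenous input also determines Omega’s prediction.

```python
# Toy simulation of the "exogenous alteration" contrast. Payoffs and
# probabilities are arbitrary; only the structure matters.
import random
random.seed(0)

N = 100_000

def smoking_lesion(force_no_smoke):
    total = 0.0
    for _ in range(N):
        lesion = random.random() < 0.5
        smoke = False if force_no_smoke else lesion   # by default the lesion drives smoking
        cancer = lesion                                # cancer depends only on the lesion
        total += (10 if smoke else 0) + (-1000 if cancer else 0)
    return total / N

def newcomb(force_one_box):
    total = 0.0
    for _ in range(N):
        disposed_to_two_box = random.random() < 0.5
        one_box = True if force_one_box else not disposed_to_two_box
        predicted_one_box = one_box   # the same exogenous input reaches Omega's prediction
        total += (1_000_000 if predicted_one_box else 0) + (0 if one_box else 1_000)
    return total / N

print(smoking_lesion(False), smoking_lesion(True))   # forcing "don't smoke" doesn't help
print(newcomb(False), newcomb(True))                 # forcing "one-box" does
```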
Omega’s prediction in reality is based on the physical state of your brain. So if altering your choice in Newcomb alters Omega’s prediction, it also alters the state of your brain. And if that is the case, it can alter the state of your brain when you choose not to smoke in the Smoking Lesion.
The ‘state of your brain’ in Newcomb and Smoking Lesion need not be directly comparable. If you could alter the state of your brain in a way that makes you better off in Smoking Lesion just by exogenously forcing the “don’t smoke” choice, then the problem statement wouldn’t be allowed to include the proviso that choosing “don’t smoke” doesn’t improve your payoff.
The problem statement does not include the proviso that choosing not to smoke does not improve the payoff. It just says that if you have the lesion, you get cancer, and if you don’t, you don’t. And it says that people who choose to smoke, turn out to have the lesion, and people who choose not to smoke, turn out not to have the lesion. No proviso about not smoking not improving the payoff.
You might be right. But then TDT chooses not to smoke precisely when CDT does, because there is nothing that’s logically-but-not-physically/causally connected with the exogenous decision whether or not to smoke. Which arguably makes this version of the problem quite uninteresting.
I’ve thought of a new way to think about the general case: call it the Alien Implant case. After people are dead (and only after they are dead) an autopsy of the brain reveals that there is a black box in their brains, thought to be implanted by aliens. There is a dial on it, set to A or B. All the people who have the dial set to A, during their lives chose to smoke, and got cancer. All the people who have the dial set to B, during their lives chose not to smoke, and did not get cancer.
It turns out that the box with the dial set to A causes cancer through a simple physical mechanism. Having the dial set to B does not have this effect.
I prefer smoking to not smoking, in general, but smoking and getting cancer would be worse than not smoking. What should I do?
“Not enough info” is now not a valid response, since I have to decide. I could try to estimate the probability that the aliens are predicting my choice, and the probability that they are causing it, and then use some complicated fake utility calculation (fake since it would not match what we would actually expect to be the outcome).
But it seems evident that this is what Eliezer called a “ritual of cognition”; someone who cares only about the outcome will just not smoke.
near universal :-P
The blackmail letter has someone reading the AI agent’s source code to figure out what it would do, and therefore runs into the objection “you are asserting that the blackmailer can solve the Halting Problem”.
Somewhat. If it is known that the AI actually does not go into infinite loops, then this isn’t a problem—but this creates an interesting question as to how the AI is reasoning about the human’s behavior in a way that doesn’t lead to an infinite loop. One sort of answer we can give is that they’re doing logical reasoning about each other, rather than trying to run each other’s code. This could run into incompleteness problems, but not always:
http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf