I’m still super confused about why you are taking this convoluted approach, or mixing this problem with Newcomb’s, when the useful calculation is very straightforward. To quote from my old post here:
An agent is debating whether or not to smoke. She knows that smoking is correlated with an invariably fatal variety of lung cancer, but the correlation is (in this imaginary world) entirely due to a common cause: an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer. Agents without this lesion contract lung cancer only 1% of the time, and an agent can neither directly observe, nor control whether she suffers from the lesion. The agent gains utility equivalent to $1,000 by smoking (regardless of whether she dies soon), and gains utility equivalent to $1,000,000 if she doesn’t die of cancer. Should she smoke, or refrain?
There are 8 possible worlds here, with different utilities and probabilities, tabulated in the old post.
An agent who “decides” to smoke has higher expected utility than one who decides not to. This “decision” lets us learn which 4 of the 8 possible worlds could be actual, and eventually, when she gets the test results, we learn which one is the actual world.
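For concreteness, here is a minimal sketch of that calculation in Python. It is not the table from the old post (which is not reproduced above); it assumes the numbers in the problem statement, credits the $1,000,000 for avoiding cancer in every world (the correction that comes up further down in this thread), and follows the FDT paper’s wording in giving the $1,000 of smoking utility to lesion-havers only:

```python
# Minimal sketch of the world-counting calculation, conditional on having the
# lesion (the only worlds where, on this reading, the agent has any reason to
# consider smoking). Assumed numbers are from the problem statement:
# P(cancer | lesion) = 0.99, P(cancer | no lesion) = 0.01.

P_CANCER_GIVEN_LESION = 0.99

def utility(lesion: bool, smoke: bool, cancer: bool) -> int:
    u = 0
    if smoke and lesion:
        u += 1_000        # smoking pleasure; lesion-havers only, per the paper
    if not cancer:
        u += 1_000_000    # not dying of cancer, credited in every world
    return u

for smoke in (True, False):
    eu = sum(
        (P_CANCER_GIVEN_LESION if cancer else 1 - P_CANCER_GIVEN_LESION)
        * utility(True, smoke, cancer)
        for cancer in (True, False)
    )
    print(f"smoke={smoke}: EU = {eu:,.0f}")

# smoke=True:  EU = 11,000
# smoke=False: EU = 10,000
```

Smoking comes out ahead by exactly the $1,000 of smoking pleasure, which is the “very straightforward” calculation referred to above.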
Note that the analysis would be exactly the same if there were a “direct causal link between desire for smoking and lung cancer”, without any “arterial lesion”. In the problem as stated there is no way to distinguish between the two, since there are no other observable consequences of the lesion. There is a 99% correlation between the desire to smoke and cancer, and that is the only thing that matters. Whether there is a “common cause”, or cancer causes the desire to smoke, or the desire to smoke causes cancer, is irrelevant in this setup. It may become relevant if there were a way to affect this correlation, say, by curing the lesion, but there is none in the problem as stated.
I can’t make sense out of that table. Why isn’t everyone smoking gaining 1000 utility?
Because the problem states that only those afflicted with the lesion would gain utility from smoking:
an arterial lesion that causes those afflicted with it to love smoking
Anyway, the interesting worlds are those where smoking adds utility, since there is no reason for the agent to consider smoking in the worlds where she has no lesion.
That’s not how the problem is usually interpreted. But you’re also giving everyone without the lesion 0 utility for not having cancer? Why are you doing this, and why include these cells if you are going to zero them out? (And your criticism is that I am making things overly hard to follow!)
I agree that some of the rows are there just for completeness (they are possible worlds after all), not because they are interesting in the problem setup. How is the problem normally interpreted? The description in your link is underspecified.
But you’re also giving everyone without the lesion 0 utility for not having cancer?
Good point, I should have put the missing million there. It wouldn’t have made a difference in this setup, since the agent would not consider taking up smoking if they have no lesion; but in a different setup, where smoking brings pleasure to those without the lesion and the probability of the lesion is specified, the probabilities and utilities for each possible world would be evaluated accordingly.
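To spell out why it wouldn’t have made a difference: the correction only touches the no-lesion worlds, and on the reading above the deliberating agent (who wants to smoke, and hence has the lesion) compares only the lesion worlds, where

EU(smoke | lesion) − EU(refrain | lesion) = (1,000 + 0.01 × 1,000,000) − (0.01 × 1,000,000) = 1,000

in favor of smoking, exactly as before.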
The problem is usually set up so that they gain utility from smoking, but choose not to smoke.
In any case, you seem to have ignored the part of the problem where smoking increases the chance of the lesion, and hence of cancer. So there seems to be some implicit normalisation? What’s your exact process there?
The problem is usually set up so that they gain utility from smoking, but choose not to smoke.
Well, I went by the setup presented in the FDT paper (which is terrifyingly vague in most of the examples while purporting to be mathematically precise), and it clearly says that only those with the lesion love smoking. Again, if the setup is different, the numbers would be different.
In any case, you seem to have ignored the part of the problem where smoking increases the chance of the lesion, and hence of cancer. So there seems to be some implicit normalisation? What’s your exact process there?
Smoking does not increase the chances of the lesion in this setup! From the FDT paper:
an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer.
Admittedly they could have been clearer, but I still think you’re misinterpreting the FDT paper. Sorry, what I meant was that smoking was correlated with an increased chance of cancer, not that there was any causal link.
Right, sorry, I let my frustration get the best of me. I possibly am misinterpreting the FDT paper, though I am not sure where and how.
To answer your question: yes, obviously the desire to smoke is correlated with an increased chance of cancer, through the common cause. If those without the lesion got utility from smoking (contrary to what the FDT paper stipulates), then columns 3, 4 and 7, 8 would definitely become relevant. We could then assign the probabilities and utilities as appropriate. What is the formulation of the problem that you have in mind?
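As a hedged sketch of such a variant (not anything from the original post): suppose everyone gains the $1,000 from smoking, and give the lesion some hypothetical prior p, which the problem leaves unspecified. Under the same world-counting approach, where the decision does not alter the probability of the lesion, smoking still wins by a flat $1,000 for any p:

```python
# Variant sketch (hypothetical numbers): everyone gains utility from smoking,
# and the lesion has some prior p, which the problem leaves unspecified.
# The decision is assumed not to change P(lesion).

def expected_utility(smoke: bool, p_lesion: float) -> float:
    total = 0.0
    for lesion in (True, False):
        p_l = p_lesion if lesion else 1 - p_lesion
        p_cancer = 0.99 if lesion else 0.01
        for cancer in (True, False):
            p = p_l * (p_cancer if cancer else 1 - p_cancer)
            u = (1_000 if smoke else 0) + (0 if cancer else 1_000_000)
            total += p * u
    return total

for p in (0.1, 0.5, 0.9):   # hypothetical priors
    diff = expected_utility(True, p) - expected_utility(False, p)
    print(f"P(lesion)={p}: EU(smoke) - EU(refrain) = {diff:,.0f}")  # 1,000 each time
```

The flat difference depends on the assumption that choosing to smoke does not shift the probability of the lesion; drop that assumption and the comparison changes.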
Smoking lesion is an interesting problem in that it’s really not that well defined. If an FDT agent is making the decision, then its reference class should be other FDT agents, so all agents in the same class make the same decision, contrary to the lesion, which should affect the probability. The approach that both of us take is to break the causal link from the lesion to your decision. I really didn’t express my criticism well above, because what I said also kind of applies to my post. However, the difference is that you are engaging in world counting, and in world counting you should see the linkage, while my approach involves explicitly reinterpreting the problem to break the linkage. So my issue is that there seems to be some preprocessing happening before the world counting, which means that your approach isn’t just a matter of world counting, as you claim. In other words, it doesn’t match the label on the tin.
Smoking lesion is an interesting problem in that it’s really not that well defined. If an FDT agent is making the decision, then its reference class should be other FDT agents, so all agents in the same class make the same decision, contrary to the lesion.
Wha...? Isn’t that like saying that Newcomb’s is not well defined? In the smoking lesion problem there is only one decision that gives you the highest expected utility, no?
Also, why are you ignoring the $1000 checkup cost in the cosmic ray problem? That’s the correct way to reason, but you haven’t provided a justification for it.
I tried to operationalize what
do the opposite of what she would have done otherwise
might mean, and came up with
Deciding and attempting to do X, but ending up doing the opposite of X and realizing it after the fact.
which does not depend on which decision is made, and so the checkup cost has no bearing on the decision. Again, if you want to specify the problem differently but still precisely (as in, such that it is possible to write an algorithm that would unambiguously calculate expected utilities given the inputs), by all means do so, and we can apply the same approach to your favorite setup.
My issue is that you are doing implicit pre-processing on some of these problems and sweeping it under the rug. Do you actually have any kind of generalised scheme, including all pre-processing steps?
I… do not follow. Unlike the FDT paper, I try to write out every assumption. I certainly may have missed something, but it is not clear to me what. Can you point out something specific? I have explained the missing $1000 checkup cost: it has no bearing on decision making, because a cosmic ray strike (making one somehow do the opposite of what they intended, and hence go and get examined) can happen with equal (if small) probability whether they take the $1 or the $100. If the cosmic ray strikes only those who take the $100, or if those who take the $100 while intending to take the $1 do not bother with the checkup, this can certainly be included in the calculations.
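To make the cancellation explicit, here is one way to write it out under the operationalization above, with q the small probability of a strike:

EU(intend $100) = (1 − q) × 100 + q × 1 − q × 1000
EU(intend $1) = (1 − q) × 1 + q × 100 − q × 1000

The −q × 1000 checkup term is identical in both lines, so it cancels in the comparison for any q; the difference is (1 − 2q) × 99 in favor of intending to take the $100.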
I know that you removed the $1000 in that case. But what is the general algorithm or rule that causes you to remove the $1000? What if the hospital visit cost $999 if you chose the $1, and $1000 otherwise?
I guess it seems to me that once you’ve removed the $1000 you’ve removed the challenging element of the problem, so solving it doesn’t count for very much.
Let’s try to back up a bit. What, in your mind, does the sentence
do the opposite of what she would have done otherwise
mean observationally? What does the agent intend to do, and what does actually happen?