how to respond to the temptation to shift from utilising actual peers to potential peers, which then seems to re-raise the specter of circularity.
I think you might be able to say something like “actual peers are why the rule was learned; virtual peers are covered because the rule was learned”.
(Just to be clear: I’m far from convinced that this is actually a good theory of counterfactuals; it’s just that it also doesn’t seem to be obviously terrible.)
Let’s suppose I face Newcomb’s problem in a yellow shirt and you face it in a red shirt. The two cases ought to be comparable, because my shirt color doesn’t matter.
The definition of peers given is in terms of what actually happens, and the question of whether you think someone is your peer is put down to “induction somehow”. I think this approach has serious problems, but it does answer your question: whether we are peers depends on the results we get. How we should include the colour of our shirts in our decision making, on the other hand, is a question of what inductive assumptions we’re willing to make when assessing likely peers.
Actually, there’s an important asymmetry in terms of inductive assumptions in just about every situation that involves “I see you do x, then I do x”. The thing is, I almost always know more about why I am doing x than about why you are doing x. You might be two-boxing because you signed a contract with the predictor beforehand, agreeing to two-box in exchange for an under-the-table payment of $2m, while I am two-boxing because I decided the dominance argument is compelling. I don’t know why you act and you don’t know why I act, but I know why I act and you know why you act.
Thus the case for epistemic indifference between “me about to choose, from my perspective” and “you about to choose, from my perspective” seems quite compromised, before we even consider what shirts we are wearing. And this is as it should be! Inferring causation from correlation is usually a bad move. Nonetheless, it seems unreasonable to think that I am so different from everyone else that I should two-box in Newcomb’s problem with a predictor that has a perfect track record.
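To put a number on that last point, here is a minimal sketch in Python of how the expected values compare once one grants the inductive assumption that the predictor’s track record extends to oneself. The payoffs are the standard Newcomb ones; the `accuracy` parameter and function name are my own illustrative assumptions.

```python
# Rough sketch: expected value of each choice, given the inductive
# leap that the predictor's track record on past players applies to me.
# Standard payoffs: opaque box holds $1m iff one-boxing was predicted;
# the transparent box always holds $1k.

def expected_value(accuracy: float) -> dict:
    """accuracy: my estimated probability that the predictor is right about me."""
    one_box = accuracy * 1_000_000                          # $1m iff predicted correctly
    two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000  # $1k, plus $1m if mispredicted
    return {"one_box": one_box, "two_box": two_box}

print(expected_value(1.0))  # {'one_box': 1000000.0, 'two_box': 1000.0}
print(expected_value(0.5))  # {'one_box': 500000.0, 'two_box': 501000.0}
# Crossover at accuracy = 1001/2000 = 0.5005: without the inductive
# assumption of peerhood, the dominance argument points the other way.
```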
The notion that “some of these people were in the same position I am in, though I don’t know who they are” seems pretty plausible, though I feel like it’s resting on some more fundamental assumptions that I’m not quite sure of right now.
I think this approach has serious problems, but it does answer your question: whether we are peers depends on the results we get.
Just to see if I’m following correctly: you’re a peer if you obtain the same result in the same situation?
My point about yellow shirts and red shirts is that it isn’t immediately obvious what counts as the same situation. For example, if the problem involved Omega treating you differently by shirt color, then it would seem like we were in different situations. Maybe your response would be that you got a different result, so you’re not a peer; there’s no need to say it’s a different situation. I guess I would then question whether someone one-boxing in Newcomb’s problem and someone opening a random box and finding $1 million would be peers just because they obtained the same result. I guess completely ignoring the situation would make the class of peers too wide.
Peers get the same results from the same actions. It’s not exactly clear what “same action” or “same result” means: is “one-boxing on the 100th run” the same as “one-boxing on the 101st run”? Is “box 100 with $1m in it” the same as “box 101 with $1m in it”? I think we should think of peers as being defined with respect to a particular choice of variables representing actions and results.
I think the definitions of these things aren’t immediately obvious, but it seems like we might be able to figure them out sometimes. Given a decision problem, “the things that I can do” and “the things that I care about” might often be known to me. It also seems that I can define variables representing copies of these things from your point of view, although it’s a bit less obvious how to do that.
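As a rough sketch of what “defined with respect to a particular choice of variables” might look like in code: the `Record` fields, the coarse-graining functions, and the vacuous-case convention below are all my own illustrative assumptions, not part of the theory as stated.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Record:
    agent: str
    action: Any   # e.g. ("one_box", 100): as fine-grained as we like
    result: Any   # e.g. ("box", 100, "$1m")

def are_peers(a: Record, b: Record,
              action_var: Callable[[Any], Any],
              result_var: Callable[[Any], Any]) -> bool:
    """Peers get the same (coarse-grained) results from the same
    (coarse-grained) actions. If the coarse-grained actions differ,
    the observed records can't falsify peerhood, so we return True
    here (a convention, and arguably part of the problem)."""
    if action_var(a.action) != action_var(b.action):
        return True
    return result_var(a.result) == result_var(b.result)

# Whether run 100 and run 101 count as "the same action" is entirely
# a property of the chosen action variable, e.g.:
ignore_run = lambda act: act[0]   # ("one_box", 100) -> "one_box"
```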
If we think about predictions vs. outcomes, we judge good predictions to be ones that match outcomes well. Similarly, a “peer inference” is a bit like a prediction (I think this group of action-outcome pairs will be similar to my own), and the outcome that can be assessed at the end is whether they actually are similar to my own action and outcome. I can’t assess whether they “would have been” peers “had I taken a different action”, but maybe I don’t need to. For example: if I assess some group of people to be my peers relative to a particular decision problem, and all the people who take action 1 end up winners while all the people who take action 2 end up losers, and I take action 1 and end up a winner, then I have done well relative to the group of people I assessed to be my peers, regardless of “what would have happened had I taken action 2”.
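Continuing the sketch above, the after-the-fact assessment described here might look like the following; the function name and scoring convention are again my own assumptions.

```python
def assess_peer_inference(me: Record, assessed: list[Record]) -> dict:
    """Check a 'peer inference' after the fact: among the people I judged
    to be my peers, did those who took my action get my result? And how
    do results compare across the actions actually taken?"""
    same_action = [p for p in assessed if p.action == me.action]
    confirmed = all(p.result == me.result for p in same_action)
    results_by_action: dict = {}
    for p in [me, *assessed]:
        results_by_action.setdefault(p.action, []).append(p.result)
    # Only actions someone actually took appear here, so no
    # counterfactual "would have been" is ever evaluated.
    return {"confirmed": confirmed, "results_by_action": results_by_action}
```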
There is a sense in which I feel peers ought also to be a group that it is relevant to judge myself against: I want to take actions that do better than my peers do, in some sense. Maybe defining actions and outcomes addresses this concern? I’m not sure.
I think a substantial problem with this theory is that I may often find that the group of peers for some problem contains only me, which leaves us without a useful definition of a counterfactual.
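In terms of the sketch above, that degenerate case is easy to see: with no assessed peers, there is nothing to compare against.

```python
me = Record("me", "one_box", "$1m")
print(assess_peer_inference(me, []))
# {'confirmed': True, 'results_by_action': {'one_box': ['$1m']}}
# 'confirmed' is vacuously True, and results_by_action contains only my
# own action: the theory yields no counterfactual for the road not taken.
```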