what are the beliefs that I would need to hold for this sentence [about acausal consequences] to make sense, and how would they pay their rent?
the rent-paying thing is above my pay grade.
Above my pay grade too, but as I am an amateur, I won’t let that deter me.
First, you would need to believe that free will is an oversimplification. More specifically, that what may appear to be a free-will moral decision made today (about saving a murderer, say) is actually a decision the making of which is spread over your entire past life (for example, the point in your life where you formed moral opinions about murder, revenge, and so on). And not just spread over your life, but actually spread over the entire history of our species, in the course of which the genes and cultural traditions that contribute to your own moral intuitions were formed.
Second, you would have to believe that your moral decision today is so correlated with those aspects of the past, and those aspects are so correlated in a causal and deterrent way with the past behavior of potential murderers, that your decision nominally made today about punishing a murderer is correlated “kinda-sorta-causally” with the number of past murders.
And third, you would have to realize that if you actually refer to this as a “causal” relationship, you probably will no longer be invited to the best dinner parties, and therefore you would choose to call this relationship “acausal”. It is basically a way of signaling your own mental hygiene—scare quotes would also suffice, but the word acausal has become entrenched.
Rent paying. Hmmm. I’m going to get cute here and say that holding these beliefs is not really about paying your rent. It is about paying your taxes. Your duty to society and all that.
OK, I would say that your first belief implies that what appears to be a decision in this case is in fact not a decision, but rather a working out of the inevitable consequences of an earlier state. The only reason it seems like a decision is because I’m ignorant of the real possibilities.
The situation, on this account, is analogous to my “deciding” not to fly away when dropped off the top of a building. I can imagine being in a delusional state where it does not seem inevitable to me that I will fall to the ground, and I can further imagine being in a state where I believe I have decided to fall, but in both cases I would simply be wrong.
Similarly, on this account, my belief that I choose whether to save a murderer or not is simply wrong. There’s no actual decision to be made; that there seems to be a decision is simply a delusion shared by approximately everyone.
It’s like having an electrical switch connected to both a buzzer and a lightbulb, with a 3-second delay between the switch being flipped and the lightbulb going on. Clearly any intuition that the buzzer caused the light to turn on is just post hoc ergo propter hoc run amok, but to start talking about the lightbulb being an acausal consequence of the buzzer going off seems downright unjustified.
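To make the common-cause structure explicit, here is a toy simulation of that circuit (purely illustrative; the names are made up for this sketch). Observationally the buzzer and the light agree perfectly, but intervening on the buzzer changes nothing downstream:

```python
import random

def trial(rng, buzzer_disabled=False):
    """One run of the circuit: switch -> buzzer (immediate) and light (3 s later)."""
    switch = rng.random() < 0.5              # the single common cause
    buzzer = switch and not buzzer_disabled  # sounds as soon as the switch flips
    light = switch                           # comes on after the fixed delay
    return buzzer, light

rng = random.Random(0)
runs = [trial(rng) for _ in range(10_000)]
assert all(b == l for b, l in runs)  # perfect correlation, with no causal link between them

# "Unplug" the buzzer: the light still comes on whenever the switch flips,
# because the correlation runs through the switch, not through the buzzer.
runs2 = [trial(rng, buzzer_disabled=True) for _ in range(10_000)]
assert not any(b for b, _ in runs2)
assert any(l for _, l in runs2)
```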
It is not clear to me why that situation changes if the buzzer is a murder and the lightbulb is me saving the murderer, or if the buzzer is Omega putting money into boxes and the lightbulb is me opening the box. Sure, the underlying mechanisms are way more complex, but I don’t see how any of that complexity affects any of what we’re talking about, if we start from the assumption that free will is an illusion.
(shrug) Maybe the whole point here is to resurrect some notion of free will worth considering, or something… I dunno. I am still working my way fairly linearly through the old OB posts, and have not actually read any of the more recent TDT stuff, so probably I shouldn’t be trying to engage in these conversations just yet.
In a situation where other people can predict your future decisions, you can reason as if your decisions have backward causality, because they were predicted before you carried them out. It’s a generalization of the concept of making credible threats and promises.
But in a situation where other people can predict my future behavior, I can instead reason as if some earlier state causes both my behavior and their prediction.
This seems to get me all the same explanatory and predictive power in an entirely straightforward fashion. Making credible threats and promises seems entirely unproblematic when looked at that way.
I accept that some awfully smart people who have thought about this a lot have concluded that this sort of backward-causality reasoning really does buy them something, and I’m open to the possibility that it’s something worth buying. But at the moment I don’t see what it could possibly be.
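The “some earlier state causes both my behavior and their prediction” reading can be made concrete with a toy model (hypothetical names, nothing from the thread): the predictor and the agent compute the same function of the earlier state, so predictions always match behavior with no backward causation anywhere.

```python
def decision_at_T2(state):
    """The agent's choice at T2, fully determined by the earlier state."""
    return "spare" if state["merciful"] else "punish"

def prediction_at_T1(state):
    """The predictor just evaluates the same function of the same state."""
    return decision_at_T2(state)

for merciful in (True, False):
    state = {"merciful": merciful}
    # Prediction and behavior always agree, via shared dependence on the state.
    assert prediction_at_T1(state) == decision_at_T2(state)
```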
You can explain and predict decisions as being implied by their antecedents, but you can’t use the same reasoning in the act of making a decision, because it leads to contradictions. This post contains the best explanation I could find.
I would say that your first belief implies that what appears to be a decision in this case is in fact not a decision, but rather a working out of the inevitable consequences of an earlier state.
Do you ever make a decision that is not like this?
I think that the official Less Wrong answer to the problem of free will is that you do make a decision, since it is the consequence of your state, but this is just as a computer may make a decision, say to allocate certain CPU cycles to certain processes (which is the sort of decision that modern operating system kernels are designed to make, and one that my computer is not making very wisely at the moment, which is why I thought of it). Given the input, this decision is inevitable, but it’s arguably still a decision.
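The kernel analogy can be put in miniature (an illustrative sketch, not real scheduler code): the function below “decides” which process runs, and given the same input its choice is inevitable, yet “decision” is still the natural word for what it does.

```python
def schedule(processes):
    """Pick the runnable process with the highest priority."""
    runnable = [p for p in processes if p["runnable"]]
    return max(runnable, key=lambda p: p["priority"])["name"]

procs = [
    {"name": "init",    "priority": 0, "runnable": True},
    {"name": "editor",  "priority": 5, "runnable": True},
    {"name": "sleeper", "priority": 9, "runnable": False},
]
assert schedule(procs) == "editor"  # same input, same inevitable "decision"
```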
For what it’s worth, I’m a compatibilist as well, although I don’t think it’s a particularly important question.
I’d merely meant to point out that if it’s possible (as stipulated in this example) to predict accurately at T1 what I’m going to do at T2, then there’s no new salient information added to the system after T1, so it’s as reasonable to talk about Omega’s behavior at T1 being determined by the state of the world at T1 as it is to talk about it as being determined retroactively by the state of the world at T2.
(Perplexed has since then pointed out that the second formulation is simpler in some sense, and therefore potentially useful, which I accept.)
That being said… as Perplexed articulates well here, it’s hard to understand the purpose of decision theory in the first place from a compatibilist or determinist stance.
The second formulation is simpler, but then leads to absurdities such as counterfactual mugging. This is a failure of the theory.
If you don’t think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?
If some particular ritual of cognition—even one that you have long cherished as “rational”—systematically gives poorer results relative to some alternative, it is not rational to cling to it.
If you don’t think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?
The Less Wrong meeting, of course. I’m no Omega, but I’m smart enough to predict that none of the regular people will take the deal, and most of the Less Wrongers will. That means I won’t give any money to any everyday people, but after the coin flip I’ll be handing out a whole bunch of suitcases with $10,000 to the Less Wrongers (while also collecting a few hundred-dollar bills). The average person in the Less Wrong meeting will come out $4,950 richer than the person on the street.
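That average is just the expected value of being a payer, under the payoffs described above (pay $100 on tails, receive $10,000 on heads):

```python
p_heads = 0.5
prize = 10_000   # the suitcase handed over after a heads flip
cost = 100       # the hundred-dollar bill collected on tails

expected_value = p_heads * prize - (1 - p_heads) * cost
print(expected_value)  # 4950.0
```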
If you mean I should do the second part, the part where I take the money, but not the first part, then it’s no longer a counterfactual mugging. Then it’s just me lying to people in a particularly weird way. The Less Wrongers might do worse on the completely unrelated problem of whether they believe weird lies, but I don’t see much evidence for this.
If you don’t think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?
You can’t “try a counterfactual mugging” unless you are Omega (or some other entity with a lot of money to throw away and some unusually and systematically accurate way of predicting people’s behaviour under counterfactual interventions).
And if you are… then those who are inclined to pay in a counterfactual mugging will win more from it on average. That’s the whole point. If you accept the premises of the problem (Omega is honest and flipped a fair coin, etc.), paying really is the winningest thing to do.
The counterfactual mugging requires that the deal be offered by an entity that is known to be both perfectly honest and a perfect predictor. If Omega tries to counterfactually mug you, you should pay him. If I try to counterfactually mug you, paying up would be significantly less wise.
A sufficiently good decision theory should get both of those cases right.
if we start from the assumption that free will is an illusion...
Nominally, decision theory is all about giving good advice to people who make decisions.
Now I am willing to entertain the idea that the free will of the decision maker is an illusion.
But my ‘willing suspension of disbelief’ goes all to hell when I ask myself: “Why does an illusion need good advice?”
Decision theory absolutely requires an assumption that the will is free in some sense. However, it does seem reasonable to consider the possibility that free decision making can be spread out in time.
Traditional game theory assumes that an agent freely chooses a set of preferences over states-of-the-world well in advance. Then, at decision time, he chooses an action so as to maximize the probability of reaching a desirable state of the world. Classical game theory and decision theory offer advice on that second free decision, but they don’t advise on those earlier free decisions which created the preference schedule.
Perhaps they should.
Or, perhaps we need an additional theory, over and above game theory and decision theory, which will advise agents on how to set their preferences so as to take into account some of the side effects of those preferences. What do we call this new kind of normative theory? ‘Moral theory’, perhaps?
Um. Let me taboo some words (“free will”, “prediction”, “decision”) here and try again.
Let us suppose that at time T1 someone either commits murder (event E1a) or doesn’t (E1b), and at T2 I either spare the murderer (E2a) or don’t (E2b). (I don’t mean to suggest here that all combinations are possible.)
The original scenario seemed to presuppose that at T1 there is a fact of the matter about whether, given E1a, T2 contains E2a or E2b, and that some potential murderers are able to use that fact in their reasoning.
My understanding is that some people are saying we can therefore understand E2a|b to have some kind of “acausal” influence on E1a|b. (If that’s not true, then I’ve utterly misunderstood either the scenario or the conversation or both, which is entirely plausible.)
I agree with you that it’s useful to talk about what happens at T1 (or earlier)… that is, to advise about preference schedules. Indeed, as you suggest, it’s hard to see what the point of advising about anything else in this scenario would be.
But I don’t understand why it isn’t just as useful, to this end, to say that there exists at T1 some set of facts S describing the state of the world (including my mind), and that E2a|b and E1a|b both depend on S, as it is to say that E2a|b exerts an acausal atemporal influence on E1a|b.
What does that second formulation buy us?
You ask me to compare two ways of saying something:
that there exists at T1 some set of facts S describing the state of the world (including my mind), and that E2a|b and E1a|b both depend on S
that E2a|b exerts an acausal atemporal influence on E1a|b
What, you ask, does that second formulation buy us? My answer, of course, is that the second is more concise and it avoids mentioning either T1 or S (good, among other reasons, because there might be multiple times T1, T2, and T3, as well as state vectors S1, S2, and S3 at those times). The second formulation is even more economical if you leave off that extraneous word “atemporal”.
What you pay for this economy is a prior climb up a pretty rugged learning curve. Is it worth it? It is hard to say at this point. However, I should point out the irony that I have cast myself in the role of a defender of all this “acausal” mumbo-jumbo; ironic because I usually play your role: a fierce skeptic of the local zeitgeist and defender of the old-fashioned, orthodox approach.
The only practical application of TDT that I’ve found is as an anti-procrastination argument. “If this excuse seems good enough to me, it’s probably going to seem about as good to me+15 minutes, so if I don’t get to work now I’m very unlikely to do so in 15 minutes either”. I still find the existence of significant acausal connections between different individuals to be pretty much claptrap.
It is about paying your taxes. Your duty to society and all that.
Um?
probably I shouldn’t be trying to engage in these conversations just yet.
But, well, sometimes I do things I shouldn’t.
The counterfactual mugging requires that the deal be offered by an entity that is known to be both perfectly honest and a perfect predictor.
No.
The entity doesn’t have to be perfect.
What you pay for this economy is a prior climb up a pretty rugged learning curve.
(nods) Well said; that makes sense. Thank you.
I like this explanation of what “acausal” means.
Wow, I think that I finally understand TDT (assuming that you accurately described it). Thanks!
Cool. Maybe you can explain it to me, then. ;)
I’m pretty sure that my sketch doesn’t capture all of TDT. Only a rationalization of the “acausal influence” aspect of it.