To quickly recap the history: people on LW noticed some clear issues with the “updating” and “physicalist ontology” of the most popular decision theories at the time (CDT/EDT), and thought that switching to “updatelessness” and a “logical/algorithmic ontology” was an obvious improvement. (I was the first person to put the two pieces together in an explicit formulation, but they were already being talked about / hinted at in the community.) Initially people were really excited, because the resulting decision theories (UDT/FDT) seemed to solve a lot of open problems in one swoop, but then, pretty quickly and over time, we noticed more and more problems with UDT/FDT that seem to have no clear fixes.
So we were initially excited but then increasingly puzzled/confused, and I guess I was expecting at least some academics to follow a similar path, either through engagement with LW ideas (why should they be bothered that much by the lack of academic publication?) or from independent invention. Instead academia seems to still be in a state similar to LW’s when I posted UDT, i.e., the ideas are floating in the air separately and nobody has put them together yet? (Or I guess that was the state of academia before FDT was published in an academic journal, so now the situation is more like: some outsiders put the pieces together in a formal publication, but still no academic is following a similar path to ours.)
I guess it’s also possible that academia sort of foresaw or knew all the problems that we’d eventually find with UDT/FDT and that’s why they didn’t get excited in the first place. I haven’t looked into academic DT literature in years, so you’re probably more familiar with it. Do you know if they’re puzzled/confused by the same problems that we are? Or what are they mostly working on / arguing about these days?
There are many, many interesting questions in decision theory, and “dimensions” along which decision theories can vary, not just the three usually discussed on LessWrong. It’s not clear to me (i) why philosophers should focus on the dimensions you primarily seem to be interested in, or (ii) what is so special about the particular combination you mention (is there some interesting interaction I don’t know about, maybe?). Furthermore, note that most philosophers probably do not share your intuitions: I’m pretty sure most of them would, e.g., not pay in counterfactual mugging. (And I have not seen a good case for why it would be rational to pay.) I don’t mean to be snarky, but you could just be wrong about what the open problems are.
> I haven’t looked into academic DT literature in years, so you’re probably more familiar with it. Do you know if they’re puzzled/confused by the same problems that we are?
I wouldn’t say so, no. But I’m not entirely sure if I understand what the open problems are. Reading your list of seven issues, I either (i) don’t understand what you are asking, (ii) disagree with the framing/think the question is misguided, or (iii) think there is an obvious answer (which makes me think that I’m missing something). With that said, I haven’t read all the posts you reference, so perhaps I should read those first.
> There are many many interesting questions in decision theory, and “dimensions” along which decision theories can vary, not just the three usually discussed on LessWrong.
It would be interesting to get an overview of what these are. Or if that’s too hard to write down, and there are no ready references, what are your own interests in decision theory?
> what is so special about the particular combination you mention

As I mentioned in the previous comment, it happens to solve (or at least seemed like a good step towards solving) a lot of problems I was interested in at the time.
> Furthermore, note that most philosophers probably do not share your intuitions
Agreed, but my intuitions don’t seem so unpopular outside academia or so obviously wrong that there should be so few academic philosophers who do share them.
> I’m pretty sure most of them would, e.g., not pay in counterfactual mugging. (And I have not seen a good case for why it would be rational to pay.)
I’m not sure I wouldn’t pay either. I see it more as an interesting puzzle than a question with a definitive answer. ETA: Although I’m more certain that we should build AIs that do pay. Is that also unclear to you? (If so, why might we not want to build such AIs?)
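To make the ex ante case for paying concrete, here is a toy calculation using the standard version of the problem: Omega flips a fair coin, asks you for $100 on tails, and on heads pays you $10,000 iff you would have paid on tails. (These numbers are the usual illustrative ones, not taken from this exchange.)

```python
# Toy model of counterfactual mugging with the standard illustrative
# numbers: fair coin; Omega asks for $100 on tails, and pays $10,000
# on heads iff your policy is to pay on tails.

def expected_value(policy_pays: bool) -> float:
    """Ex ante expected winnings for an agent whose policy is fixed
    before the coin flip (the updateless perspective)."""
    p_heads = 0.5
    reward_if_heads = 10_000 if policy_pays else 0  # Omega rewards only payers
    cost_if_tails = -100 if policy_pays else 0      # payers hand over $100
    return p_heads * reward_if_heads + (1 - p_heads) * cost_if_tails

print(expected_value(True))   # policy of paying:   4950.0
print(expected_value(False))  # policy of refusing: 0.0
```

The puzzle, of course, is that after learning the coin landed tails, the updated expected value of paying is just −$100, which is why the normative question remains open.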
> I don’t mean to be snarky, but you could just be wrong about what the open problems are.
Yeah, I’m trying to keep an open mind about that. :)
> With that said, I haven’t read all the posts you reference, so perhaps I should read those first.
Cool, I’d be interested in any further feedback when you’re ready to give it.
> It would be interesting to get an overview of what these are. Or if that’s too hard to write down, and there are no ready references, what are your own interests in decision theory?

Yeah, that would be too hard. You might want to look at these SEP entries: Decision Theory, Normative Theories of Rational Choice: Expected Utility, Normative Theories of Rational Choice: Rivals to Expected Utility, and Causal Decision Theory. To give an example of what I’m interested in, I think it is really important to take into account unawareness and awareness growth (see §5.3 of the first entry listed above) when thinking about how ordinary agents should make decisions. (Also see this post.)
> I’m not sure I wouldn’t pay either. I see it as more of an interesting puzzle than having a definitive answer. ETA: Although I’m more certain that we should build AIs that do pay. Is that also unclear to you? (If so why might we not want to build such AIs?)
Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.
On the point about AI (not directly responding to your question, to which I don’t have an answer): I think it’s really important to be clear about whether we are discussing normative, constructive or descriptive decision theory (using Elliott Thornley’s distinction here). For example, the answers to “is updatelessness normatively compelling?”, “should we build an updateless AI?” and “will some agents (e.g. advanced AIs) commit to being updateless?” will most likely come apart (it seems to me). And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.
Thanks, will look into your references.

> Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.

I wrote “I’m really not sure at this point whether UDT is even on the right track” in UDT shows that decision theory is more puzzling than ever, which I think you’ve read? Did you perhaps miss that part?

(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I’d point that out since it may not be obvious at first glance.)

> And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.

Yeah, I agree with this to some extent, and try to point out such confusions or make such distinctions when appropriate. (Such as in the CM / indexical values case.) Do you have more examples where making such distinctions would be helpful?
Yes, missed or forgot about that sentence, sorry.
> (BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I’d point that out since it may not be obvious at first glance.)
Thanks.
> Do you have more examples where making such distinctions would be helpful?
I was mostly thinking about discussions surrounding what the “correct” decision theory is, whether you should pay in CM, and so on.