There are many, many interesting questions in decision theory, and “dimensions” along which decision theories can vary, not just the three usually discussed on LessWrong.
It would be interesting to get an overview of what these are. Or if that’s too hard to write down, and there are no ready references, what are your own interests in decision theory?
what is so special about the particular combination you mention
Furthermore, note that most philosophers probably do not share your intuitions
Agreed, but my intuitions don’t seem so unpopular outside academia or so obviously wrong that there should be so few academic philosophers who do share them.
I’m pretty sure most of them would e.g. pay in counterfactual mugging. (And I have not seen a good case for why it would be rational to pay.)
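For readers unfamiliar with the problem, the ex-ante case for paying can be sketched with the commonly used illustrative stakes (pay $100 on tails, receive $10,000 on heads if Omega predicts you would pay); these specific dollar amounts are my assumption, since the thread doesn’t fix them:

```python
# Ex-ante expected value of a policy in counterfactual mugging,
# evaluated before the coin flip. Omega rewards you on heads only
# if it predicts you would have paid on tails.
# The $100 / $10,000 stakes are illustrative assumptions.

P_HEADS = 0.5  # fair coin

def expected_value(pays_on_tails: bool) -> float:
    """Expected value of committing (or not) to pay, before the flip."""
    heads_payoff = 10_000 if pays_on_tails else 0  # reward for predicted payers
    tails_payoff = -100 if pays_on_tails else 0    # cost of actually paying
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(True))   # 4950.0 — the paying policy wins ex ante
print(expected_value(False))  # 0.0
```

The tension the thread is discussing is that after seeing tails, paying looks like a sure loss of $100, even though the policy of paying is better in expectation before the flip.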
I’m not sure I wouldn’t pay either. I see it as more of an interesting puzzle than a question with a definitive answer. ETA: Although I’m more certain that we should build AIs that do pay. Is that also unclear to you? (If so, why might we not want to build such AIs?)
I don’t mean to be snarky, but you could just be wrong about what the open problems are.
Yeah, I’m trying to keep an open mind about that. :)
With that said, I haven’t read all the posts you reference, so perhaps I should read those first.
Cool, I’d be interested in any further feedback when you’re ready to give it.
It would be interesting to get an overview of what these are. Or if that’s too hard to write down, and there are no ready references, what are your own interests in decision theory?
I’m not sure I wouldn’t pay either. I see it as more of an interesting puzzle than a question with a definitive answer. ETA: Although I’m more certain that we should build AIs that do pay. Is that also unclear to you? (If so, why might we not want to build such AIs?)
Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.
On the point about AI (not directly responding to your question, to which I don’t have an answer): I think it’s really important to be clear about whether we are discussing normative, constructive or descriptive decision theory (using Elliott Thornley’s distinction here). For example, the answers to “is updatelessness normatively compelling?”, “should we build an updateless AI?” and “will some agents (e.g. advanced AIs) commit to being updateless?” will most likely come apart (it seems to me). And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.
(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I’d point that out since it may not be obvious at first glance.)
And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.
Yeah I agree with this to some extent, and try to point out such confusions or make such distinctions when appropriate. (Such as in the CM / indexical values case.) Do you have more examples where making such distinctions would be helpful?
I wrote “I’m really not sure at this point whether UDT is even on the right track” in UDT shows that decision theory is more puzzling than ever which I think you’ve read? Did you perhaps miss that part?
Yes, missed or forgot about that sentence, sorry.
(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I’d point that out since it may not be obvious at first glance.)
Thanks.
Do you have more examples where making such distinctions would be helpful?
I was mostly thinking about discussions surrounding what the “correct” decision theory is, whether you should pay in CM, and so on.
It would be interesting to get an overview of what these are. Or if that’s too hard to write down, and there are no ready references, what are your own interests in decision theory?
As I mentioned in the previous comment, it happened to solve (or at least seemed like a good step towards solving) a lot of problems I was interested in at the time.
Agreed, but my intuitions don’t seem so unpopular outside academia or so obviously wrong that there should be so few academic philosophers who do share them.
I’m not sure I wouldn’t pay either. I see it as more of an interesting puzzle than a question with a definitive answer. ETA: Although I’m more certain that we should build AIs that do pay. Is that also unclear to you? (If so, why might we not want to build such AIs?)
Yeah, I’m trying to keep an open mind about that. :)
Cool, I’d be interested in any further feedback when you’re ready to give it.
Yeah, that would be too hard. You might want to look at these SEP entries: Decision Theory, Normative Theories of Rational Choice: Expected Utility, Normative Theories of Rational Choice: Rivals to Expected Utility and Causal Decision Theory. To give an example of what I’m interested in, I think it is really important to take into account unawareness and awareness growth (see §5.3 of the first entry listed above) when thinking about how ordinary agents should make decisions. (Also see this post.)
Okay, interesting! I thought UDT was meant to pay in CM, and that you were convinced of (some version of) UDT.
On the point about AI (not directly responding to your question, to which I don’t have an answer): I think it’s really important to be clear about whether we are discussing normative, constructive or descriptive decision theory (using Elliott Thornley’s distinction here). For example, the answers to “is updatelessness normatively compelling?”, “should we build an updateless AI?” and “will some agents (e.g. advanced AIs) commit to being updateless?” will most likely come apart (it seems to me). And I think that discussions on LW about decision theory are often muddled due to not making clear what is being discussed.
Thanks, will look into your references.
I wrote “I’m really not sure at this point whether UDT is even on the right track” in UDT shows that decision theory is more puzzling than ever which I think you’ve read? Did you perhaps miss that part?
(BTW this issue/doubt about whether UDT / paying CM is normative for humans is item 1 in the above linked post. Thought I’d point that out since it may not be obvious at first glance.)
Yeah I agree with this to some extent, and try to point out such confusions or make such distinctions when appropriate. (Such as in the CM / indexical values case.) Do you have more examples where making such distinctions would be helpful?
Yes, missed or forgot about that sentence, sorry.
Thanks.
I was mostly thinking about discussions surrounding what the “correct” decision theory is, whether you should pay in CM, and so on.