I was actually thinking of writing a follow-up post like this. I basically agree.
> Let’s talk about two kinds of choice:
>
> 1. choice in the moment
> 2. choice of what kind of agent to be
I think this is the main insight: depending on what you consider the goal of decision theory, you’re thinking about either (1) or (2), and the two lead to conflicting conclusions. My implicit claim in the linked post is that when people describe thought experiments like Newcomb’s Problem, or discuss decision theory in general, they appear to be referring to (1), at least in classical decision theory circles. On LessWrong, though, people often switch to discussing (2) in a confusing way.
> the core problem in decision theory is reconciling these various cases and finding a theory which works generally
I don’t think this is the core problem, because if (1) and (2) are genuinely different goals, it doesn’t make sense to look for a single theory that does best at both.
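To make the conflict concrete, here is a minimal sketch (my own illustration, not from either post; the $1,000,000/$1,000 payoffs and the near-perfect predictor are just the stock Newcomb setup). Evaluated as a choice of what agent to be, per (2), the one-boxing policy wins by a wide margin; evaluated as a choice in the moment, per (1), two-boxing dominates no matter what the opaque box contains:

```python
# Newcomb's problem, scored from the two perspectives above.
# Stock assumptions: $1,000,000 in the opaque box if one-boxing is
# predicted, $1,000 always in the transparent box, predictor accuracy a.

BIG, SMALL = 1_000_000, 1_000

def policy_payoff(policy, a=0.99):
    """Expected payoff of *being the kind of agent* that runs `policy`.

    The predictor inspects the agent's policy and fills the opaque box
    iff it predicts one-boxing; with accuracy a it predicts correctly.
    """
    if policy == "one_box":
        return a * BIG                # box is filled iff prediction is right
    return (1 - a) * BIG + SMALL      # box is filled only on a mispredict

# Perspective (2): which agent is it better to be?
print(policy_payoff("one_box"))       # 990,000
print(policy_payoff("two_box"))       # 11,000

# Perspective (1): choice in the moment, box contents already fixed.
# Whichever amount is in the opaque box, two-boxing adds SMALL on top:
for opaque in (0, BIG):
    print(f"one-box: {opaque}  vs  two-box: {opaque + SMALL}")
```

The same situation, scored two ways, hands the win to opposite actions, which is exactly the conflict described above.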
> I think this is the main insight: depending on what you consider the goal of decision theory, you’re thinking about either (1) or (2), and the two lead to conflicting conclusions.

Agree on the first part 👍

On this:

> I don’t think this is the core problem, because if (1) and (2) are genuinely different goals, it doesn’t make sense to look for a single theory that does best at both.
My bad for being unclear. What I meant to convey here was:

1. I tend to think that decision theory should be about what kind of decision-making algorithm an agent should implement.
2. Given this, Newcomb’s problem is still interesting and useful to talk about, even if you remove the “paradox” aspect (see the sketch below).

Agree that, insofar as decision theory asks two different questions, the answers will probably be different, and looking for a single theory which works for both isn’t wise.
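As a way of making that framing concrete, here is a minimal sketch (my own toy setup, not from the thread: the standard $1,000,000/$1,000 payoffs and a predictor with accuracy `a`). Treating one-boxing and two-boxing as candidate algorithms turns Newcomb’s problem into a plain expected-value comparison with a clean answer: one-boxing is the better algorithm whenever the predictor is even slightly better than chance:

```python
# Newcomb's problem as algorithm selection rather than in-the-moment choice.
# Toy numbers: $1,000,000 opaque prize, $1,000 transparent prize,
# predictor accuracy a in [0, 1].

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(a):
    # Opaque box is filled exactly when the predictor is right.
    return a * BIG

def ev_two_box(a):
    # Opaque box is filled only when the predictor is wrong.
    return (1 - a) * BIG + SMALL

# One-boxing wins exactly when a * BIG > (1 - a) * BIG + SMALL,
# i.e. when a > (BIG + SMALL) / (2 * BIG).
threshold = (BIG + SMALL) / (2 * BIG)
print(threshold)  # 0.5005: barely-better-than-chance prediction suffices

for a in (0.4, 0.5005, 0.6, 0.99):
    print(a, ev_one_box(a), ev_two_box(a))
```

On this reading nothing paradoxical remains; what is left is the still-interesting question of how accurate a predictor of your algorithm has to be before implementing a one-boxing algorithm pays off.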