I’m not deeply invested in decision theory; it is just one of several fun things I like to think about. To demonstrate an extreme version of this attitude: I am thinking about a math problem right now. I know that there is a solution in the literature (someone told me). I do not plan to look that solution up.
Now, with decision theory I care more about getting the correct answer, as opposed to finding it myself, than I do with that math problem. But the primary reason I think about decision theory is still not that I want to know the answer. So if someone said, “here’s a paper that I think contains important insights on this problem,” I’d read it; but if they said, “here’s a bunch of papers written by a community whose biases I find personally annoying and do not think are conducive to solving this particular problem, some of which probably contain some insights,” I’d be more wary.
It should be noted that I do agree with your point to some extent, which is why we are having this discussion.
Well, presumably you find Nozick’s work formulating Newcomb’s and Solomon’s problems insightful.
Indeed.
I suspect a number of things on that page are insightful solutions to problems you hadn’t considered.
That did not appear to be the case when I looked.
Keep in mind: this is the SEP page on Causal Decision Theory, not on Newcomb’s Problem or any other decision theory problem.
Which you linked to because, AFAICT, it is one of only three SEP pages that mention Newcomb’s Problem, two of which I have read the relevant parts of, and the third of which I will read soon.
To see Eliezer’s insights into decision theory it really helps to read his paper, not just his blog posts.
To see that he has insights, you just need to read his blog posts, although to be fair many of the ideas get less than a LessWrong-length post of explanation.
Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.
I’d expect the best ones to.
In general, I’m not at all equipped to give you a guided tour of the philosophical literature.
It seems like, once I exhaust your limited but easily-accessible knowledge, which seems like about now, I should look up philosophical decision theory papers at the same leisurely pace I think about decision theory. My university should have some sort of database.
From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does.
It seems like it does just the wrong thing to me. For example, it two-boxes on Newcomb’s problem.
However, the amount of sense it seems to make leads me to suspect that I don’t understand it. When I have time, I will read the appropriate paper(s?) until I’m certain I understand what he means.
The Death in Damascus/decision instability problem is something for TDT/UDT to address.
TDT and UDT as currently formulated would make the correct counterfactual prediction:
“If I go to Damascus, I’ll die; if I go to Aleppo, I’ll die; if I use a source of bits that Death doesn’t have access to, I’ll live with probability 1/2.”
This avoids decision instability, and, more generally, these theories don’t let you condition your decision on your own decision.
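That counterfactual can be sketched as a toy Monte Carlo simulation (the modeling choices here are my own assumptions, e.g. that Death, unable to read the coin, simply commits to Damascus; nothing below is from TDT or UDT themselves):

```python
import random

def death_in_damascus(strategy, trials=100_000):
    """Toy simulation of Death in Damascus.

    Death perfectly predicts any deterministic strategy, so it is
    always waiting in whichever city such a strategy picks.  A source
    of bits Death cannot access escapes the prediction.
    """
    deaths = 0
    for _ in range(trials):
        if strategy == "deterministic":
            choice = "Aleppo"    # any fixed, derivable choice...
            death_city = choice  # ...is perfectly predicted by Death
        else:  # "coin": bits Death has no access to
            choice = random.choice(["Damascus", "Aleppo"])
            # Death knows the strategy but not the coin, so it can
            # only commit to one city (assumed: Damascus).
            death_city = "Damascus"
        if choice == death_city:
            deaths += 1
    return deaths / trials
```

Under these assumptions, the deterministic agent dies in every trial, while the coin-flipping agent survives about half the time, matching the prediction above.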
Concluding anything about philosophers’ insights when you haven’t read any papers, and two days ago weren’t aware there were any papers, is a bit absurd.
I was aware of the existence of papers, and I knew some of the main ideas that were contained in them.
As far as I can tell, you don’t know much at all about academic philosophy.
There is something about academic philosophy that is not conducive to reaching conclusions about problems and then moving on to other, harder problems at anywhere near the rate of many other academic disciplines. Some of this is clearly because philosophy is hard, but some of it is also the collective irrationality of philosophers.
I don’t know as much as I should. I know some.
As for the minds behind the LW take on decision theory, I’m not sure what it is they’ve accomplished besides writing some insightful things about decision theory.
Writing up a large collection of true statements of philosophy that contains very few false ones is not much of an achievement in itself, but it is an indicator of what I think is the right attitude, especially for problems like decision theory.
AI theory is also an enormous intuition pump for this type of problem.
I mean, Christ, consider the outside view!
Considering the outside view leads me to two conclusions:
1. You’re right.
2. The best way to make progress on DT is, if possible, to get our ideas published, thus allowing TDT and academic philosophy’s ideas to mingle and recombine into superior ideas in the minds of more than O(5) people. Alternatively, if TDT sucks, then attempting this will lead academic philosophers to produce strong arguments for why TDT sucks, which will also help figure out the problem.
I believe my current planned actions WRT reading philosophy papers are sufficient to cover the outside and inside evidence for 1, and I’m trying to figure out whether there are better strategies than Eliezer’s current one for 2, and what the costs are.