The manifesto has a nice paragraph where Glymour lists the contributions of many mathematical philosophers. This might be relevant to UDT:
Yup. This is why I was so surprised in January 2011 that Less Wrong had never before mentioned formal philosophy, which is the branch of philosophy most relevant to the open research problems of Friendly AI. See, for example, Self-Reference and the Acyclicity of Rational Choice or Reasoning with Bounded Resources and Assigning Probabilities to Arithmetical Statements.
Thanks for the links. I just read those two papers and they don’t seem to be saying anything new to me :-(
In your linked piece, you were talking about formal epistemology. Here you say “formal philosophy.” Is that a typo, or do you think that formal epistemology exhausts formal philosophy? (I would hope not the latter, since lots of formal work gets done in philosophy outside epistemology!)
Formal epistemology is a subfield within formal philosophy, probably the largest.
Larger than logic? Hmm … maybe you’re thinking about “formal philosophy” in a way that I am unfamiliar with.
This is pretty much unrelated, but do you think maybe you could write a short post about the relevance of algorithmic probability for human rationality? There’s this really common error ’round these parts where people say a hypothesis (e.g. God, psi, etc.) is a priori unlikely because it is a “complex” hypothesis according to the universal prior. Obviously the “universal prior” says no such thing; people are just taking whatever cached category of hypotheses they think are more probable for other, unmentioned reasons and then labeling that category “simple”, which might have to do with coding theory but has nothing to do with algorithmic probability. Considering this appeal to simplicity is one of the most common attempted argument-stoppers, it might benefit the local sanity waterline to discourage this error. Fewer “priors”, more evidence.
ETA: I feel obliged to say that though algorithmic probability isn’t that useful for describing humans’ epistemic states, it’s very useful for talking about FAI ideas; it’s basically a tool for transforming indexical information about observations into logical information about programs, and also proofs thanks to the Curry–Howard isomorphism, which is pretty cool, among the other reasons it’s cool.
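The point being made about the universal prior can be seen in a toy sketch. In the actual definition, a hypothesis is “simple” exactly to the extent that some short program produces it, with each program of length L contributing 2^-L of prior mass; no intuitive category label ever enters the definition. The mini-language below is entirely made up for illustration (a hypothetical interpreter over a prefix-free set of bitstring “programs”), not a real implementation of Solomonoff induction, which is incomputable.

```python
from fractions import Fraction

# Hypothetical mini-language: programs are bitstrings, and this made-up
# interpreter maps each program to the hypothesis (a string) it outputs.
# The program set {0, 10, 110, 111} is prefix-free, as the definition requires.
TOY_INTERPRETER = {
    "0": "A",
    "10": "B",
    "110": "A",
    "111": "C",
}

def toy_universal_prior(hypothesis):
    """Sum 2^-len(p) over every toy program p that outputs `hypothesis`."""
    return sum(
        Fraction(1, 2 ** len(program))
        for program, output in TOY_INTERPRETER.items()
        if output == hypothesis
    )

# "A" gets mass from a 1-bit program and a 3-bit program: 1/2 + 1/8 = 5/8.
print(toy_universal_prior("A"))  # 5/8
print(toy_universal_prior("B"))  # 1/4
print(toy_universal_prior("C"))  # 1/8
```

Note that which hypotheses count as “simple” here depends entirely on program length in the chosen language, which is precisely why gesturing at an intuitive category and calling it “complex” does no algorithmic-probability work.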
I already have a post about that. Unfortunately I screwed up the terminology and was rightly called on it, but the point of the post is still valid.
Thanks. I actually found your amendment more enlightening. Props again for your focus on the technical aspects of rationality; stuff like that is the saving grace of LW.