This is one of those areas where I think people on LessWrong would benefit from reading more academic philosophy.
In my view, it’s a significant and under-appreciated milestone that we’ve reduced the original Problem of Induction to the problem of justifying Occam’s Razor.
It’s been a while since I took academic philosophy classes, but I’m pretty sure the Problem of Induction and the Problem of Justifying Occam’s Razor have been known to be basically the same thing for at least a century. When they were taught to me in undergrad, they were presented that way, IIRC. When I taught my undergrads the problem, that’s certainly how I presented it.
I do agree that it constitutes a significant milestone of intellectual progress though!
As for the counting argument, that’s less well known. When I first heard it in undergrad (in the context of learning about Solomonoff Induction), it struck me as another important milestone of intellectual progress, one that doesn’t solve the problem but probably brings us closer. I felt the same way about one of the things that makes Solomonoff Induction work (simpler hypotheses have more “look-alikes” and thus, when grouped together with their look-alikes, more measure) and about subjective Bayesianism. Finally, I’m excited about something proven in Logical Induction: that logical inductors have the Occam Property. I still haven’t gotten around to understanding it deeply and thinking about what it means, though. All in all, I remain optimistic that the problem of induction has a solution.
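The “look-alikes” point can be made concrete with a toy model (my own illustrative sketch, not Solomonoff’s actual construction): take programs to be bitstrings, declare everything after the first occurrence of “11” to be dead code, weight each program of length n by 2^−n, and group programs by their “active part”. Simpler hypotheses then accumulate more total measure, because they have exponentially more look-alikes among the longer programs:

```python
from collections import defaultdict
from itertools import product

# Toy "language" (invented for illustration): a program is a bitstring, and
# everything after the first occurrence of the delimiter "11" is dead code.
# Two programs with the same active part are "look-alikes": they compute
# the same hypothesis.
def active_part(program: str) -> str:
    i = program.find("11")
    return program if i == -1 else program[:i]

# Each program of length n gets (unnormalized) prior weight 2^-n, echoing
# the Solomonoff-style length prior.
MAX_LEN = 14
measure = defaultdict(float)
for n in range(1, MAX_LEN + 1):
    for bits in product("01", repeat=n):
        program = "".join(bits)
        measure[active_part(program)] += 2.0 ** -n

# Shorter active parts (simpler hypotheses) accumulate more total measure:
# at every program length, they have exponentially more look-alikes.
ranked = sorted(measure.items(), key=lambda kv: -kv[1])
for hyp, m in ranked[:5]:
    print(f"hypothesis {hyp!r:8} measure {m:.4f}")
```

Running this, the empty active part (the simplest hypothesis) tops the ranking, and measure falls off as the active part gets longer — the grouping-with-look-alikes effect described above.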
To clarify, what I think is underappreciated (and what’s seemingly being missed in Eliezer’s statement about his belief that the future is similar to the past), isn’t that justifying an Occamian prior is necessary or equivalent to solving the original Problem of Induction, but that it’s a smaller and more tractable problem which is sufficient to resolve everything that needs to be resolved.
Edit: I’ve expanded on the Problem of Occam’s Razor section in the post:
In my view, it’s a significant and under-appreciated milestone that we’ve reduced the original Problem of Induction to the problem of justifying Occam’s Razor. We’ve managed to drop two confusing aspects from the original PoI:
We don’t have to justify using “similarity”, “resemblance”, or “collecting a bunch of confirming observations”, because we know those things aren’t key to how science actually works.
We don’t have to justify “the future resembling the past” per se. We only have to justify that the universe allows intelligent agents to learn probabilistic models that are better than maximum-entropy belief states.
We don’t have to justify using “similarity”, “resemblance”
I think you still do. In terms of induction, you still have the problem of grue and bleen. In terms of Occam’s Razor, it’s the problem of which language a description needs to be simple in.
Justifying that blue is an a priori more likely concept than grue is part of the remaining problem of justifying Occam’s Razor. What we don’t have to justify is the mistaken claim that science operates based on generalized observations of similarity.
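The language-relativity point can be made concrete with a small sketch (my own illustration; the switchover year is an arbitrary stand-in for the time t in Goodman’s definition). Relative to blue/green primitives, “grue” needs a time conditional; relative to grue/bleen primitives, it is “blue” that needs one — neither concept is intrinsically simpler:

```python
T = 2030  # arbitrary switchover year standing in for Goodman's time t

# Language 1: blue and green are the primitive predicates.
def is_blue(obj):
    return obj["color"] == "blue"

def is_green(obj):
    return obj["color"] == "green"

# "grue" defined in language 1: green before T, blue afterwards.
# It needs an extra time conditional, so it looks complex here.
def is_grue(obj, year):
    return is_green(obj) if year < T else is_blue(obj)

# "bleen": blue before T, green afterwards.
def is_bleen(obj, year):
    return is_blue(obj) if year < T else is_green(obj)

# Language 2: now take grue and bleen as the primitives instead.
# "blue" defined in their terms needs the time conditional: bleen
# before T, grue afterwards.
def is_blue_2(obj, year):
    return is_bleen(obj, year) if year < T else is_grue(obj, year)
```

So description length depends on which predicates the language treats as primitive, which is exactly why justifying Occam’s Razor still requires justifying a choice of language.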