I think the main problem with ill-defined questions is that they don’t sufficiently constrain the answer space: you end up with people arguing over multiple proposed answers and no clear way to determine which is right. Replacing them with crisp technical questions can be useful, but the ill-defined questions tend not to fully constrain how you’re supposed to translate them into crisp technical questions either.
An approach that I’ve found helpful is to gather many ill-defined questions in the same field and try to find some single insight (or a small set of related insights) that answers all of them simultaneously. While each question on its own may not narrow the answer space down to a single point, the whole collection has a much better chance of doing so.
To illustrate this, compare philosophers who tried to solve individual, seemingly crisp toy problems within anthropic reasoning, such as the Sleeping Beauty Problem, to those who worked on anthropic reasoning in general, and both to my attempt to solve many ill-defined problems simultaneously with UDT.
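For readers unfamiliar with the Sleeping Beauty Problem, here’s a minimal Monte Carlo sketch (the function name and setup are my own illustration, not anything from the literature) of why the crisp formulation fails to constrain the answer: a fair coin is flipped, and Beauty is woken once if heads, twice (with amnesia between awakenings) if tails. The same experiment supports both classic answers to “what should her credence in heads be upon awakening?” depending only on which counting rule you adopt:

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the Sleeping Beauty setup: flip a fair coin each trial;
    heads -> one awakening, tails -> two awakenings (with amnesia).
    Returns the heads-frequency under the two natural counting rules."""
    rng = random.Random(seed)
    heads_experiments = 0   # experiments where the coin landed heads
    heads_awakenings = 0    # awakenings that occur in a heads-experiment
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += awakenings
    per_experiment = heads_experiments / trials          # "halfer" rule, ~1/2
    per_awakening = heads_awakenings / total_awakenings  # "thirder" rule, ~1/3
    return per_experiment, per_awakening
```

Both frequencies are facts about the very same process, so the toy problem by itself doesn’t tell you which one “credence” should track — which is exactly the kind of underdetermination that lets halfers and thirders argue indefinitely.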
This isn’t to deny that there are lots of other examples where “hacking around the edges” or “pushing into the shadows” did work, but one should be careful not to elevate the heuristic to some sort of dogma, even to the extent of having someone be in charge of reminding others to not “bite too much”.
Don’t you think that others working on Sleeping Beauty, absent-minded driver, Parfit’s hitchhiker etc. helped pave the way for UDT by providing a list of questions for UDT to answer?
I mean, I’m not saying that one shouldn’t try to do better than the people who worked on all these problems (I might even be tempted to agree with what Jeffreyssai might say on the subject, though I expect you wouldn’t go that far), but it seems like even in a reasonably efficient approach to the problem, “searching under the streetlight” by playing with some crisply formulated problems may help pave the way to deeper answers.
(In particular, I think that MIRI’s current research strategy is not-completely-crazy in starting under the streetlight in various respects; e.g., I’d be rather surprised if the modal agent formulation turned out to be useful as-is for FAI, but I do think there’s a reasonable chance that it will help pave the way to deeper insights; I agree with you that it may well turn out that probability is the wrong tool for handling logical uncertainty, but trying to use probability and seeing what results we can get seems an obviously useful thing to do; and I think it’s sufficiently likely that diagonalization problems will bite a wide range of attempts to handle logical uncertainty that working on workarounds to diagonalization makes sense in parallel with work on logical uncertainty, rather than trying to solve logical uncertainty first.)
Keep in mind that generally I advocate “Explore multiple approaches simultaneously” and “Trust your intuitions, but don’t waste too much time arguing for them”. Sometimes I do feel obligated to explain why I’m not as excited about some research direction as might be expected given my interests (and in the case of “probabilistic reflection” there’s the additional issue that I’m having trouble making intuitive sense of what the formalism is saying), but I don’t mean to discourage other people from exploring their ideas if they still think it’s worthwhile after hearing what I have to say.
Don’t you think that others working on Sleeping Beauty, absent-minded driver, Parfit’s hitchhiker etc. helped pave the way for UDT by providing a list of questions for UDT to answer?
I’m certainly not disputing that having those questions available was helpful, but I just want to point out that there seems to be a danger where people focus on these relatively “crisp” problems too much, think they have solutions, and then argue over them endlessly, when they might have made better progress by zooming out and looking at the bigger picture. If you consider the dozens of academic papers published on the Sleeping Beauty Problem, I don’t think the majority of them (i.e., beyond the first few) can be said to have helped pave the way for UDT.
Keep in mind that generally I advocate “Explore multiple approaches simultaneously” and “Trust your intuitions, but don’t waste too much time arguing for them”.
Fair enough!
(Re your last paragraph, it sounds like we’re in near-perfect agreement about the usefulness of previous research. I suppose that upthread, you were saying “these people were following a streetlight/shadow strategy and it didn’t actually work” and I was saying “retrospectively, it looks like the correct strategy would have been to first explore some anthropic problems and then try to find a common answer to all of them, which sounds like it can be described as starting under the streetlight, then moving into the shadows”. So it sounds like we agree about the actual subject matter, and any apparent disagreement is either due to talking about different things or due to disagreement about how best to apply the metaphor to the example — in which case there’s nothing that would actually be useful to debate. Cool! :-))
I wish I were better acquainted with the history of ideas. Certainly there are insights that in retrospect are so broadly useful that they must have resolved many seemingly separate confusions when they were first developed — for example logic, Bayesian updating, expected utility maximization, computation as a mathematical abstraction, and information theory. But I’m not sure how their inventors came up with them. Were they deliberately seeking to solve multiple problems with a single insight, or at least holding the multiple relevant problems in the back of their minds? Maybe somebody more familiar with the history can help with the answer?
It’s true, sometimes it is possible to have an insight which solves multiple weakly related problems at once, but it’s very rare and tends to require a paradigm change, like Einstein’s willingness to discard the fixed background space on which everything else happens. But this is basically the difference between art and craft. If you want systematic progress, you hack around the edges. It is certainly a good thing to occasionally try to bite through the problem as a whole if you have a flash of inspiration. What is not OK is to get stuck with a mouthful, unable to chew through and unwilling to spit it out. Gah, metaphors. I am not qualified to judge whether your UDT solves every problem you say it does, as I have a strong aversion to anthropics due to its poor testability, but it does not seem like a paradigm shift to me.
How much of an outlier is UDT in this regard, do you think? What other examples can you think of?