So, I think this post is pretty bad as a ‘comprehensive’ list of the open problems, or as ‘the rationality agenda’. All of the top answers (Wei, Scott, Brienne, Thrasymachus) add something valuable, but I’d be pretty unhappy if this were considered the canonical answer to “what is the research agenda of LW”, or our best attempt at answering that question (I think we can do a lot better). It doesn’t address many things I care about. Here are a few examples:
What are the best exercises for improving your rationality? Fermi estimates, Thinking Physics, Calibration Training, are all good, but are there much better ones?
What are the best heuristics for how to fight Moloch? What are examples of ways in which we have sold our souls to Moloch?
What are practical heuristics for how to get in touch with the world in a way that is reality-revealing rather than reality-masking?
What challenges do we face as embedded agents, and how should we think about them?
(This one’s a bit weird) What is the best rationality advice in the utilitarianism/deontology/virtue ethics ontology?
For virtue ethics, right now we think that curiosity and caring about something intensely are key. Is there a different virtue we’re not noticing?
For deontology, we have rules like “hold off on proposing solutions” and “sit down by a clock for 5 minutes trying to solve a problem before giving up on it”. What are the most important rules for rationality?
For utilitarianism, we have ways to improve our precise modeling like “practice Fermi estimates, solve Thinking Physics problems, do calibration training”. Are there other quantitative practices that improve our ability to bring ourselves and the world into alignment?
I also don’t come away from the answers feeling like I “learned” something, in the way that I do from posts that set out big problems, like Embedded Agency, Reality-Revealing and Reality-Masking Puzzles, and The Treacherous Path to Rationality. What Failure Looks Like is a great example of setting up a set of open problems by putting in the work to communicate them. (It’s focused on AI rather than humans, so I didn’t include it in the list above.)
So I feel conflicted on the list. I think there are lots of valuable ideas in it, but it doesn’t feel at all like something I’d want to give someone right now as our best list of the open problems. I think I might vote this between −1 and −3 at the minute.
(I notice I think I’d be pretty happy if the post title just changed to “What are some open problems in Human Rationality?”. I think then I’d vote at somewhere between +1 and +4.)
So I’m not sure I’d include this in the Best Of book in the first place. If I did, I agree it’d be pretty obviously wrong to imply that the list was comprehensive. I didn’t think that was implied by the post – if you ask a question, usually you don’t end up getting a comprehensive answer right away.
As a post on a live forum, I think it’s pretty obvious that this isn’t a comprehensive list – if it’s missing things, people are supposed to just add those things, and you should expect it to need updating over time.
In the case of a printed book, I’m not sure if the right thing is to change the title, or just make sure to say “here are some specific answers this question post got.” Either seems potentially fine to me.
I very much don’t think the title of the LessWrong post itself should change – it’s trying to ask a question, not spell out any particular expectation of an answer.