How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?
If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff, such that those who are not perfectly rational might exist in an “unhappy valley”. Can you explain this phenomenon, including how one could find oneself in such a valley (and how one might get out)? How much is this term meant to suggest an analogy with the “uncanny valley”?
Less important, but related: What insights from hedonic/positive psychology have you found most revealing about people’s ability to make choices aimed at maximizing happiness (e.g., the limitations of affective forecasting, the paradox of choice, the impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?
(I feel these are sufficiently intertwined to constitute one general question about the relationship between self-knowledge and happiness.)
While I find I have benefitted a great deal from reading posts on OB/LW, I also feel that, given the intellectual abilities of the people involved, the site does not function as an optimally effective way to acquire the art of rationality. I agree that the wiki is a good step in the right direction, but if one of the main goals of LW is to train people to think rationally, I think LW could do more to provide resources for allowing people to bootstrap themselves up from wherever they are to master levels of rationality.
So I ask: What are the optimal software, methods, educational tools, problem sets, etc. the community could provide to help people notice and root out the biases operating in their thinking? The answer may be resources that already exist, but I have a proposal.
Despite being a regular reader of OB/LW, I still feel like a novice at the art of rationality. I realize that contributing one’s ideas is an effective way to correct one’s thinking, but I often feel as though I have all these intellectual sticking points which could be rooted out quite efficiently—if only the proper tools were available. As far as my own learning methods go, assuming a realistic application of current technology, I would love something like the following:
An interactive test (calibrated to respond to the learner’s demonstrated level of ability, similar to the GRE) with a set of 1000+ problems, through which I could detect the biases operating in my thinking as I approach given questions. Using such a tool, I believe I could train myself up to the point where I could more closely approximate what I remember Eliezer saying somewhere about how he approaches an argument: his brain cycles through possible biases almost as automatically as it controls his autonomic nervous system.
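To make the adaptive idea concrete, here is a minimal sketch of the kind of question-selection loop such a test could use: keep a running estimate of the learner’s ability, serve the question whose difficulty is closest to it, and nudge the estimate after each answer. Real computerized adaptive tests (like the GRE’s) use item response theory; the function names and the simple staircase update below are illustrative assumptions, not a description of any existing system.

```python
def next_question(questions, ability):
    """Pick the question whose difficulty best matches the current ability estimate."""
    return min(questions, key=lambda q: abs(q["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Simple staircase update: move toward harder or easier material."""
    return ability + step if correct else ability - step

# Toy usage: two questions of different difficulty, one answer recorded.
bank = [
    {"id": 1, "prompt": "Spot the base-rate neglect...", "difficulty": 1.0},
    {"id": 2, "prompt": "Spot the conjunction fallacy...", "difficulty": 3.0},
]
ability = 2.6
q = next_question(bank, ability)       # serves the difficulty-3.0 item
ability = update_ability(ability, correct=True)
```

A production version would replace the fixed step with an item-response-theory estimator, but even this staircase captures the core behavior: the test converges on the band where the learner’s biases actually show up.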
[In terms of convenience, an added bonus would be the ability to view questions through one of the standard flashcard applications available on the iPhone (or other devices), so I could look at, say, a few (or a few dozen) questions whenever the urge struck me. I dream of such a tool someday even incorporating SuperMemo-type capabilities, wherein even experts could keep their knowledge fresh by having questions reappear on a schedule optimized to prevent long-term degradation of memories. I am interested in helping to develop such a learning tool.]
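The SuperMemo-style scheduling mentioned above is well understood; a minimal sketch of the classic SM-2 update rule (the algorithm behind SuperMemo-2 and many flashcard apps) is below. The function name and the 0–5 quality rating follow the published SM-2 description; treat this as an illustration of the mechanism, not as any particular app’s implementation.

```python
def sm2_update(interval_days, repetitions, easiness, quality):
    """Return the next (interval_days, repetitions, easiness) after one review.

    quality: 0-5 self-rating of recall; >= 3 counts as a successful recall.
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence for this item.
        repetitions = 0
        interval_days = 1
    else:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            # Intervals grow geometrically with the item's easiness factor.
            interval_days = round(interval_days * easiness)
        repetitions += 1
    # Adjust easiness by recall quality; 1.3 is SM-2's standard floor.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days, repetitions, easiness
```

So a new bias-spotting question rated 5 would come back after 1 day, then 6 days, then roughly 16, and so on — exactly the “questions reappear on an optimal schedule” behavior described above, with items you miss cycling back quickly.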
I welcome any input about how to proceed with such a plan. Although I am a PhD candidate/adjunct professor, I don’t know what the optimal technology for such a project would be. It does seem, though, that the technical demands necessary to get such a project off the ground need not be imposing.
Once such a project were underway, I believe the community could come together to provide effective questions and answers. As I see it, it would be neither necessary nor desirable for such a project to be created by a single person.
I believe there are many people for whom this project could be valuable. We might find that, were such a tool implemented, it would at the very least raise the level of discourse on LW. Beyond that, who knows. Thanks for your suggestions.