Brillyant’s comment above basically gives the answer to this: beauty doesn’t provide as much long-term happiness as the ICVPI factors (individual character, values, preferences, and interests).
Happiness levels in our society are stagnating because materialist desires only provide short-term fulfillment. No matter what good thing happens to you (be it a promotion, an inheritance, marrying “the love of your life”, …), your happiness may rise for a while but then drop back to its initial level. (Evidence of this is provided in both of the books I cited originally.) A dating site that works like online shopping is not just creepy, but also actively diminishes happiness, because it offers too much random choice and too little help in connecting with people. Just look at the graph:
So it seems that the only way to lastingly raise your happiness is to change your preferences. Be more social. Be nice to people. Be less judgmental.
I am just starting to do this, and although it works for me, I am not yet ready to explain it, and I haven’t read enough to recommend and summarize further reading. But Seligman’s “Authentic Happiness” is at least a start. And “Search Inside Yourself” is the right thing for examining how much your preference function has been corrupted by unhelpful external factors.
Hi LWers,
I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)
I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as the material on self-improvement, although I find it quite scattered among loads of overly theoretical stuff. Does it seem odd that I have learned more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and “foundational” articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle to get actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren’t “rationalists” surrounded by a visible aura of formidability?), and I would like to help them change that.
My interest is mainly in contributing more structured, useful content, and also in banding together with fellow LWers to practice and apply our rationalist skills. As a stretch goal, I think we could pick someone really evil as our enemy and take them down, just to show our superiority. Let me stress that I am not kidding here. If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right leverage and play out a great plot that just leaves everyone gasping “shit!” And then we’ll have changed the world, because people will start taking rationality seriously.
Let me send out a warm “thank you” to you all for welcoming me in your rationalist circles!