My gains from LessWrong have come in several roughly distinct steps, all of which have come as I’ve been working my way through the Sequences. (Taking notes has really helped me digest and cement the information.)
1) Internalizing that there is a real world out there, and like Louie said, that ideas can be right or wrong. Making beliefs pay rent, referents and references, etc. A perspective on beliefs that they should accurately reflect the world and be used for achieving things I care about; all else is fluff. That every correct map should agree with every other, so that life does not seem such a disconnected jumble of different domains. Overall these kinds of insights really helped to give focus to my thoughts and clear out the clutter of my mind.
2) Now that I have a conception of what beliefs should do, LessWrong helps me be aware of and combat various biases that interfere with forming accurate beliefs and with taking coherent action based on those beliefs. I’ve made large gains here, but of course I’m not finished.
3) Forming a coherent, productive, happy me. Bootstrapping and snowballing effects: as I learn more, I get better at seeking out good information. On this point, see Anna Salamon’s posts going back to “Humans are not automatically strategic.” The book “The Art of Learning” by Josh Waitzkin has been immensely helpful. Learning about Cognitive Behavioral Therapy (this book is good) has been very helpful in being empirical and rational about the self. I believe this is basically the material of the Luminosity sequence, though I read those posts some time ago and should probably review them.
There’s far too much to go into specifically, but the transformation has been huge, and it continues. When conversing with non-rationalists, arguments feel like a match between Bruce Lee and some guy off the street. It’s not that I have any more raw intellectual power than before, but my set of tools and training has improved tremendously. Unfortunately, a non-rationalist doesn’t (much) realize the extent to which they’re outmatched, and indeed there is seldom a point to “beating someone.” Instead you remember that even a poorly argued position can be correct, look out for points you may have missed, and perhaps try to introduce a few concepts. It feels like I’m working at a level above most people, that the conversation is a different thing to me than to them; it’s not as if I can just tell them all this. I discovered LessWrong through an interest in existential risk, and at first it seemed kind of boring and not very useful, a weird academic exercise. I wish I could convey to more people how helpful it’s been, and the extent to which I didn’t know what I didn’t know.
(A note on the community: I think it’s great that it’s here, and I think that some really great material has been produced, and continues to be produced, beyond the “core” material by Eliezer. That said, I almost never read comments, and I only read front-page, promoted posts; the return on time for reading anything else doesn’t seem great enough right now compared to my other work and studies. Just to give an idea of how I’m using LessWrong.)
Shannon really is a long-time rationalist who has contributed hugely, hosting a ton of meetups in a great space and making a lot of other contributions as well. Thumbs up.