That’s a good way of describing how the difference in my own thinking felt—when I was a Christian I had enough of a framework to try to do things, but they weren’t really working. (It’s not a very good framework for pursuing utilitarian values.) Then I bumbled around for a couple of years without much direction. LW gave me a framework again, and one that worked a lot better for my goals.
I’m not sure I can say the same thing about other people, though, so we might not be talking about the same thing. (Though I tend not to pay as much attention to the intelligence or “level” of others as most people seem to, so it might just be that.)
The one improvement that I’m fairly certain I can credit to LessWrong/HPMOR/etc. is getting better at morality. First, being introduced to and convinced of utilitarianism helped me get a grip on how to reason about ethics. Realizing that morality and “what I want the world to be like, when I’m at my best” are really similar, possibly the same thing, was also helpful. (And from there, HPMOR’s Slytherins and the parts of Objectivism that EAs tend to like were the last couple of ideas I needed in order to learn how to have actual self-esteem.)
But as to the kinds of improvements you’re interested in: I’m better at thinking strategically, often just from using some estimation in decision-making. (If I built this product, how many people would I have to sell it to, at what price, to make it worth my time? That calculation often results in not building the thing.) But the time since I discovered LessWrong included my last two years of college and listening to startup podcasts to cope with a boring internship, so it’s hard to attribute credit.
My memory isn’t better, but I haven’t gone out of my way to improve it. I’m pretty sure that programming, and reading about programming, are much better ways to improve at programming than reading about rationality is. The sanity waterline is already pretty high in programming, so practicing and following best practices is more efficient than trying to work them out yourself from first principles.
It didn’t surprise me at all to see that someone had made a post asking this question. The Sequences are a bit over-hyped, in that they suggest rationality might make the reader superhuman, and then it usually doesn’t happen. I think I still got a lot of useful brain-tools from them, though. It’s like a videogame that was advertised as the game to end all games, and then it turns out to just be a very good game with a decent chance of becoming a classic. (For the record, my expectations didn’t go quite that high, as far as I can remember, but it’s not surprising that some people’s did. It’s possible mine did and I just take disappointment really well.)