That’s new info to me. I wouldn’t consider that winning, though. First, because that’s lower than the general population. And second, because I am in a long-term relationship but don’t consider myself to be winning (there are areas where I want improvement and don’t see meaningful growth from my efforts).
I’m not sure what you mean by “winning” broadly; I thought it was just getting a girlfriend or something. Successfully improving in some target area? Honestly, I was expecting this post to be about an AI arms race or something, but apparently it’s just calling all rationalists losers at an undefined contest.
Ok, that’s fair—I didn’t define my terms here and am guilty of “Expecting Short Inferential Distances”. (I’ve now edited the post to add some background). By winning I was referencing this post where “winning” is defined as gaining utility, aka achieving your goals, whatever those goals may be.
As I concluded in my own case, the issue isn’t that rationalists aren’t winning at all, but that, to my (limited) knowledge, they aren’t achieving their goals as much as I would have predicted. Anyone who only predicted single-digit percentage improvement from learning about rationality probably doesn’t have anything to explain. But those of us who expected rationalism to produce large and obviously significant gains, or expected rationalists to become known for their success across domains, do have something to explain.
OK, the thesis makes sense. Like, you should be able to compare “people generally following rationalist improvement methods” and “people doing some other thing” and find an effect.
It might have a really small effect size across rationalism as a whole. And rationalism might have just converged on other self-improvement systems. (Honestly, if your self-improvement system is just “results that have shown up in 3 unrelated belief systems,” you would do okay.)
It might also be hard to improve, or accelerate, winningness in all of life through type 2 thinking. If so, what are we actually doing when we engage in type 2 thinking and believe we’re improving? idk. Good questions, I guess.