I continue to feel extremely confused by these posts. What the hell are people thinking when they say rationalists are supposed to win more?
The average income of long-term rationalists seems likely to be 5-10x that of their most reasonable demographic counterparts, largely driven by a bunch of outlier successes in entrepreneurship (Anthropic alone is around $100B in equity, heavily concentrated in rationalists), as well as early investments in crypto.
Rationalists are probably happier than their demographic counterparts. The core ideas that rationalists identified as important, mostly around the crucial importance of AI, are now obvious to most of the rest of the world, and the single most important issue in history (the trajectory of the development of AGI) is shaped by rationalist priorities vastly vastly above population baseline.
What the hell do people want? I don’t get it. Like, it’s overall not clear to me that the whole AI safety and rationality thing is working out, because maybe all the things we are doing are hastening the end of the world and not helping that much. But by naive measures of success, power, and influence, the rationalists are winning beyond what I think anyone was reasonably expecting, and far, far above what any similar demographic group appears to be doing.
Rationalists are winning. Maybe it’s not enough, because existential risk is still real. But man, I sure am not living a life where I feel like my social circle and extended community are failing to have an effect on the world, or ending up with a disappointing amount of wealth, influence, and power. I am worried we are shaping the trajectory of humanity towards worse things, but man, we surely don’t lack markers of conventional success.
I continue to feel extremely confused by these posts. What the hell are people thinking when they say rationalists are supposed to win more?
Scott’s comment linked in another comment here sums up the expectations at the time. I am not sure if a plain list like this gives a different impression, but note that my sentiment for the talk wasn’t that rationalists should win more. Rather, I wanted to say that their existing level of success was probably what you should expect.
The average income of long-term rationalists seems likely to be 5-10x that of their most reasonable demographic counterparts, largely driven by a bunch of outlier successes in entrepreneurship (Anthropic alone is around $100B in equity, heavily concentrated in rationalists), as well as early investments in crypto.
I find myself questioning this in a few ways.
Who do you consider the most reasonable demographic counterparts? Part of what prompted me to give the talk in 2018 was that, where I looked, rationalist and rationalist-adjacent software engineers weren’t noticeably more successful than software engineers in general. Professional groups seem highly relevant to this comparison.
If we look at the income statistics in SSC surveys (graph and discussion), we see American-tech levels of income (most respondents are American and in tech), but not 5×–10×. It does depend on how you define “long-term rationalists”.
Why evaluate the success of rationalists as a group by an average that includes extreme outliers? This approach can take you some silly places. For example, if Gwern is right about Musk, all but one American citizen with bipolar disorder could have $0 to their name, and they’d still be worth $44k on average[1]. Can you use this fact to say that bipolar American citizens are doing well as a whole? No, you can’t.
The mistaken expectations that built up for the individual success of rationalists weren’t built on the VC model of rare big successes. “We’ll make you really high-variance, and some of you will succeed wildly” wasn’t how people thought about LW. (Again, I think they were wrong to think this at all. I am just explaining their apparent model.)
Rationalists are probably happier than their demographic counterparts.
It’s a tough call. The median life satisfaction score in the 2020 SSC survey (picked as the top search result) is 8 on a 1–10 scale; the “mood scale” is 7. But then a third of those who answered the relevant questions say they either have a diagnosis of or think they may have depression and anxiety. The most common anxiety scores are 3, 2, then 7. A fourth has seriously considered suicide. My holistic impression is that a lot of rationalists online suffer from depression and anxiety, which are anti-happiness.
The core ideas that rationalists identified as important, mostly around the crucial importance of AI, are now obvious to most of the rest of the world, and the single most important issue in history (the trajectory of the development of AGI) is shaped by rationalist priorities vastly vastly above population baseline.
I agree, some rationalist memes about AI have spread far and wide. Rationalist language like “motte and bailey” has entered the mainstream. This wasn’t the case in 2018[2], and I would want discussions about rationalist success today to acknowledge it. This is along the lines of the long-term, collective (as opposed to individual) impact that Scott talks about in the comment.
Of course, Eliezer disagrees that the AI part constitutes a success and seems to think that the memes have been co-opted, e.g., AI safety for “brand safety”.
What the hell do people want?
I think they want superpowers, and some are (were) surprised rationality didn’t give them superpowers. By contrast, you think rationalists are individually quite successful for their demographics, and it’s fine. I think rationalists are about as successful as their demographics, and it’s fine.
[1] According to Forbes Australia, Elon Musk’s net worth is $423 billion. Around 2.8% of the US population of approximately 342 million is estimated by the NIH to have bipolar disorder, giving approximately 9.6 million people. $423,000 million ÷ 9.6 million people ≈ $44,062.
[2] Although rationalist terminology had had a success(?) with “virtue signaling”.
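For concreteness, a minimal sketch of the calculation in footnote [1] (the figures are the ones cited there; the assumption that every other member of the group holds $0 is the hypothetical from the comment above, not a real statistic):

```python
# Hypothetical from footnote [1]: a single extreme outlier dominates a group mean.
musk_net_worth = 423e9   # USD; the Forbes Australia figure cited above
us_population = 342e6    # approximate US population, as above
bipolar_share = 0.028    # ~2.8% prevalence (NIH estimate cited above)

bipolar_population = bipolar_share * us_population    # ~9.6 million people
mean_net_worth = musk_net_worth / bipolar_population  # everyone else assumed to hold $0

print(f"group size: {bipolar_population / 1e6:.1f} million")  # ~9.6 million
print(f"mean net worth: ${mean_net_worth:,.0f}")              # ~$44,000, matching the footnote
print("median net worth: $0 (by construction of the hypothetical)")
```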
Why evaluate the success of rationalists as a group by an average that includes extreme outliers?
Because one of the central principles of the rationalist and EA ethos is scope sensitivity. Almost everyone I know has pursued high-variance strategies, because their altruistic/global goals make their returns diminish much less sharply.
Imagine trying to evaluate the success of Y-Combinator on the median valuation of a Y-Combinator startup. A completely useless exercise. Similarly, evaluating the income of a bunch of rationalists who are pursuing, if anything, even higher-variance plans on the median outcome is just as useless.
Imagine trying to evaluate the success of Y-Combinator on the median valuation of a Y-Combinator startup. A completely useless exercise.
Surely this is dependent on whether you’re evaluating the success of Y-Combinator from the standpoint of Y-Combinator itself, or from the standpoint of “the public” / “humanity” / etc., or from the standpoint of a prospective entrant into YC’s program? It seems to me that you get very different answers in those three cases!
Sure, though I don’t think I understand the relevance? In this case, people were pursuing careers with high expected upside with relative risk-neutrality, and indeed, in aggregate, they succeeded enormously well at that on conventional metrics (I generally think people did so at substantial moral cost, with a lot of money being made by building doomsday machines, but we can set that part aside for now).
It’s also not the case that this resulted in a lot of poverty, as even people for whom the high-variance strategies didn’t succeed still usually ended up with high-paying software developer jobs. Overall, the distribution of strategies does seem to me like it was well chosen, with a median a few tens of thousands of dollars lower than for people who just chose stable software engineering careers, and an average many millions higher, which makes sense given that people are trying to solve world-scale problems.[1]
[1] Again, I don’t really endorse many of the strategies that led to this conventional success, but I feel like that is a different conversation.
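To illustrate the shape of that claim (not with survey data; the salary levels, the 1% windfall probability, and the payoff distribution below are invented placeholders), a population that mostly earns ordinary tech salaries while pursuing long shots, with a small fraction landing very large payoffs, ends up with a median somewhat below a stable-career baseline but a mean far above it. A rough sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated individuals per group

# Stable-career baseline: steady software engineering salaries (invented figures).
stable = rng.normal(180_000, 30_000, n)

# High-variance strategy: usually a somewhat lower salary while pursuing long-shot,
# world-scale plans, plus a small chance of a very large payoff (e.g. an equity windfall).
base = rng.normal(150_000, 30_000, n)
hit = rng.random(n) < 0.01                          # ~1% land a big outcome
payoff = rng.lognormal(mean=18, sigma=1.0, size=n)  # heavy-tailed, ~$100M-scale mean
high_variance = base + hit * payoff

for name, incomes in [("stable career", stable), ("high variance", high_variance)]:
    print(f"{name:14s}  median ${np.median(incomes):>12,.0f}   mean ${incomes.mean():>12,.0f}")
# The high-variance group's median lands a few tens of thousands below the stable
# baseline, while its mean is pulled far above it by the rare large payoffs.
```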
Well, the relevance is just that from the standpoint of an individual prospective X, the expectation of the value of becoming an X is… not irrelevant, certainly, but also not the only concern or even the main concern; rather, one would like to know the distribution of possible values of becoming an X (and the median outcome is an important summary statistic of that distribution). This is true for most X.
So if I am a startup founder and considering entry into YC’s accelerator program, I will definitely want to judge this option on the basis of the median valuation of a YC startup, not the mean or the maximum or anything of that sort.
Similarly, if I am considering “being a rationalist” (or “joining the rationalist community”, or “following the rationalists’ prescriptions for life”, etc.), I will certainly judge this option on the basis of the median outcome. (Of course I will condition on my own demographic characteristics, and any other personal factors that I think may apply, but… not too much; extreme Inside View is not helpful here.)
Hopefully not the median! That seems kind of insane to me. I agree you will have some preference distribution over outcomes, but clearly “optimizing for the median” is a terrible decision-making process.
clearly “optimizing for the median” is a terrible decision-making process
Clearly, but that’s also not what I suggested, either prescriptively or descriptively. “An important axis of evaluation” is not the same thing as “the optimization target”.
My point is simple. You said that evaluating based on the median outcome is a “completely useless exercise”. And I am saying: no, it’s not only not useless, but in fact it’s more useful than evaluating based on the mean/expectation (and much more useful than evaluating only based on the mean/expectation), if you are the individual agent who is considering whether to do a thing.
(Optimizing for the mean is, of course, an even more terrible decision-making process. You presumably know this very well, on account of your familiarity with the FTX fiasco.)
EDIT: A rate limit on my comments?? What the hell is this?! (And it’s not listed on the moderation log page, either!)
I will definitely want to judge this option on the basis of the median valuation of a YC startup, not the mean or the maximum or anything of that sort.
This sure sounds to me like you said you would use it at the very least as the primary evaluation metric. I think my reading here is reasonable, but fine if you meant something else. I agree the median seems very reasonable as one thing to think about among other things.
EDIT: A rate limit on my comments?? What the hell is this?! (And it’s not listed on the moderation log page, either!)
Yep, we have downvote-based rate limits. I think they are reasonable, though not perfect (my guess is that in this case the result is not ideal; I also expect your comments to get upvoted more, at which point the rate limit will disappear).
I would like them to be listed on the moderation log page, but haven’t gotten around to it. You (or anyone else) would be welcome to make a PR for that, and we will probably also get around to it ourselves at some point.
Part of what prompted me to give the talk in 2018 was that, where I looked, rationalist and rationalist-adjacent software engineers weren’t noticeably more successful than software engineers in general.
How are you selecting your control group? If there’s a certain bar of success to be in the same room with you, then of course the groups don’t look different. My impression is that rationalists disproportionately work at tier 1 or 2 companies. And when they don’t, it’s more likely to be a deliberate choice.
I was comparing software engineers I knew who were and weren’t engaged with rationalist writing and activities. I don’t think they were strongly selected for income level or career success. The ones I met through college were filtered by the fact that they had entered that college.
My impression is that rationalists disproportionately work at tier 1 or 2 companies. And when they don’t, it’s more likely to be a deliberate choice.
It’s possible I underestimate how successful the average rationalist programmer is. There may also be regional variation. For example, in the US and especially around American startup hubs, the advantage may be more pronounced than it was locally for me.
While I think the rationalist folk are outperforming most other groups and individuals, I will sign on to Scott’s proposal to drop the slogan. “Rationalists should win” was developed in the context of a decision theory argument with philosophers who, in my opinion, had quite insane beliefs and thought that it was rational to choose to lose for vague aesthetic reasons, and it was not intended to connote fully general life advice. Of course I regularly lose games that I play. (Where by “games” I also refer to real-life situations well modeled by game theory.)
Sure, but I feel like the actual conversation in all of these posts is about whether “the rationalist philosophy works as a tool for winning”, and at least measured by conventional success metrics, the answer is “yes, overwhelmingly so, as far as I can tell”. I agree there is an annoying word-game here that people play with a specific phrase that was intended to convey something else, but the basic question seems like one worth asking for every community one is part of, or philosophy one adopts.
My guess is the people asking such questions really mean “why don’t I win more, despite being a rationalist”, and their criticisms make much more sense as facts about them or mistakes they’ve made which they attribute to holding them back on winning.
“Do as well as Einstein?” Jeffreyssai said, incredulously. “Just as well as Einstein? Albert Einstein was a great scientist of his era, but that was his era, not this one! Einstein did not comprehend the Bayesian methods; he lived before the cognitive biases were discovered; he had no scientific grasp of his own thought processes. Einstein spoke nonsense of an impersonal God—which tells you how well he understood the rhythm of reason, to discard it outside his own field! He was too caught up in the drama of rejecting his era’s quantum mechanics to actually fix it. And while I grant that Einstein reasoned cleanly in the matter of General Relativity—barring that matter of the cosmological constant—he took ten years to do it. Too slow!”
“Too slow?” repeated Taji incredulously.
“Too slow! If Einstein were in this classroom now, rather than Earth of the negative first century, I would rap his knuckles! You will not try to do as well as Einstein! You will aspire to do BETTER than Einstein or you may as well not bother!”
(Yudkowsky, 2008, Class Project)
Probably something along the lines of what LW was meant to aspire to above.
See, when you put it like that, I think the reason rationalists don’t win as much as was expected is quite obvious: claims about the power of rationality were significant overpromises from the start.