What’s so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don’t care so much about rationality, and specifically I don’t really see why having the human-style half-assed implementation of it around is considered a good idea.
“Rationality” as used around here indicates “succeeding more often”. Or if you prefer, “Rationality is winning”.
That’s the idea. From the looks of it, most of us either suck at it, or only needed it for minor things in the first place, or are improving slowly enough that it’s indistinguishable from “I used more flashcards this month”. (Or maybe I just suck at it and fail to notice actually impressive improvements people have made; that’s possible, too.)
[Edit: CFAR seems to have a better reputation for teaching instrumental rationality than LessWrong, which seems to make sense. Too bad it’s a geographically bound organization with a price tag.]
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation. Or at least to measure winning, so we could say whether CFAR lessons contribute to winning.
Sometimes income is used as a proxy for winning. It has some problems. For our purposes, I would guess a big problem is that changes in income over a year or two (roughly the period during which CFAR has been providing workshops) are mostly noise. (Also, for employees this metric could be more easily optimized by preparing them for job interviews, helping them optimize their CVs, and pressuring them into doing as many interviews as possible.)
The biggest issue with using income as a metric for ‘winning’ is that some people—in fact, most people—do not really have income as their sole goal, or even as their most important one. For most people, things like social standing, respect, and importance are far more important.
That, and income being massively externally controlled for the majority of people. The world, contrary to reports, is not a meritocracy.
Huh?
If you mean that people don’t necessarily get the income they want, well, duh...
No, it isn’t, but I don’t see the relevance to the previous point.
I think the point was government handout programs. This is a massive external control on many people’s incomes, and it is part of how the world is not a meritocracy.
(Please note, I ADBOC with CellBioGuy, so don’t take my description as anything more than a summary of what I think he is trying to say.)
He might also be saying that most people don’t have an obvious path for marginal increases to their income.
This is closer to what I was getting at. Above someone mentioned government assistance programs, which is also true to a point but not really what I meant (another ‘disagree connotatively’).
I was mostly going for the fact that circumstances of birth (family and status, not genetics), location, and locked-in life history have far more to do with income than a lot of other factors. And those who make it REALLY big are almost without exception extremely lucky rather than extremely good.
You what with CellBioGuy..?
Should be “ADBOC”—“agree denotationally, but object connotatively”. (ygert is probably thinking of “disagree” instead of “object”.)
Ah, thanks. I usually think of such things as “technically correct but misleading”—that’s more or less the same thing, right?
Yes.
Yes, my mistake. I was in a rush, and didn’t have time to double check what the acronym was. Edited now.
I think I could make an argument that “object” has a semantic advantage over “disagree”; another advantage is that “adboc” can be pronounced as a two-syllable word.
Yes, this is true. You cannot meaningfully compare incomes between people that, say, live in developed vs. developing countries.
The value of income varies pretty widely across time and place (let alone between different people), so using it as a metric for “winning” is highly problematic. For instance, I was mostly insensitive to my income before getting married (and especially having my first child) beyond being able to afford rent, internet, food, and a few other things. The problem is, I don’t know of any other single number that works better.
Since in the local vernacular rationality is winning, you need no measures: the correlation is 1 by definition :-/
It’s a very bad proxy, as “winning” is, more or less, “achieving things you care about”, and income is a rather poor measure of that. For the LW crowd, anyway.
Talk of “rationality as winning” is about instrumental rationality; when Viliam talks about the correlation between rationality and winning, it’s not clear whether he means instrumental rationality (making the best decisions toward your goals) or epistemic rationality (having true beliefs), but the second seems more likely.
But even if it’s about instrumental rationality, I wouldn’t say that the correlation is 1 by definition: I’d say winning is a combination of luck, resources/power, and instrumental rationality.
Exactly. And the question is how much we can increase this result using CFAR’s rationality-improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? The values 1% and 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains of rationality among the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of “proofs that rationality works”; but it could also mean that 1 person in 10 gains 1000% and 9 in 10 gain nothing, in which case you would have a lot of “proofs that rationality doesn’t work” and a few exceptions that could be explained away (e.g. by saying that they were so talented that they would have gotten the same results without CFAR).
It would also be interesting to know the curve relating increases in winning to increases in rationality. Maybe rationality gives compound interest: becoming +1 rational gives you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because by removing the non-rational parts of your life you gain additional resources. Or maybe it is the other way round: becoming +1 rational gives you 100% more winning, and becoming +2 and +3 rational only gives you an additional 10% and 1% more winning, because you have already picked all the low-hanging fruit.
The shape of this curve, if known, could be important for CFAR’s strategy. If rationality follows the compound interest model, then CFAR should pick some of their brightest students and fully focus on optimizing them. On the other hand, if the low-hanging-fruit model is more likely, CFAR should focus on some easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
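The strategic fork can be made concrete with a toy comparison, using only the illustrative percentages from the comment above (these are made-up numbers, not data): given the same training budget, does taking one student to +3 beat taking three students to +1?

```python
# Toy comparison of the two hypothetical rationality-return curves.
# All percentage figures are the illustrative ones from the comment, not real data.

# Total extra winning (%) for a student at rationality level +1, +2, +3.
compound = {1: 10, 2: 30, 3: 100}             # gains accelerate as techniques combine
low_hanging_fruit = {1: 100, 2: 110, 3: 111}  # 100%, then +10%, then +1%

def total_gain(curve, students_at_level):
    """Sum the extra winning over all trained students.
    students_at_level maps a rationality level to how many students reach it."""
    return sum(curve[level] * n for level, n in students_at_level.items())

# The same training budget spent two ways:
deep = {3: 1}   # one student taken all the way to +3
broad = {1: 3}  # three students taken to +1

for name, curve in [("compound", compound), ("low-hanging fruit", low_hanging_fruit)]:
    print(f"{name}: deep = {total_gain(curve, deep)}%, broad = {total_gain(curve, broad)}%")
```

Under the compound model the deep strategy dominates (100% vs 30% total gain); under the low-hanging-fruit model the broad strategy dominates (300% vs 111%), which is exactly the strategic difference sketched above.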
By the way, for the effective altruist subset of the LW crowd, income (the part of it donated to effective charity) is a good proxy for winning.
Also, rationality might mostly work by making disaster less common—it’s not so much that the victories are bigger as that fewer of them are lost.
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let’s assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates values “success” and “failure”. For a typical smart but not rational person, the coin generates 90% “success” and 10% “failure”. For an x-rationalist, the coin generates 99% “success” and 1% “failure”. If your experiment consists of one coin flip per person and then counting the winners, most winners will not be x-rationalists, simply because of the base rates.
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and have 10 “successes”, then the players have total chances of 35% vs 90%. That seems like a greater difference, although the base rates would still dwarf this.
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations which are complex in a way that makes hundreds of such failures possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure would be unlikely.
I just don’t know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
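The arithmetic above can be checked, and the base-rate point made explicit, with a short sketch. The 90% and 99% success rates are from the comment; the assumption that x-rationalists are 1% of the population is mine, purely for illustration:

```python
# Failure-prevention model: each "coin flip" succeeds with probability p.
# p = 0.90 for a smart-but-not-rational person, p = 0.99 for an x-rationalist.
# The 1% x-rationalist base rate is an assumed number for illustration.

P_SUCCESS = {"typical": 0.90, "x-rationalist": 0.99}
BASE_RATE = {"typical": 0.99, "x-rationalist": 0.01}

def p_all_successes(group, n_flips):
    """Probability of winning an n-flip game (every flip must succeed)."""
    return P_SUCCESS[group] ** n_flips

def share_of_winners(n_flips):
    """Among people who win the n-flip game, what fraction are x-rationalists?"""
    winners = {g: BASE_RATE[g] * p_all_successes(g, n_flips) for g in BASE_RATE}
    return winners["x-rationalist"] / sum(winners.values())

print(p_all_successes("typical", 10))        # ~0.35, the 35% figure above
print(p_all_successes("x-rationalist", 10))  # ~0.90, the 90% figure above
print(share_of_winners(1))                   # ~1.1%: one flip barely moves the base rate
print(share_of_winners(10))                  # ~2.6%: most winners are still not x-rationalists
```

So a longer game does widen the gap in win probability (35% vs 90%), but with a small base rate the bulk of observed winners are still non-rationalists, which is the "base rates would still dwarf this" point.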
I suspect rationality does a lot to prevent likely failures as well as unlikely failures.
This is getting better, slowly. Workshops are going on in Melbourne sometime in early 2014 (February?), and they’re looking to do more international events going forward.
Try this. Do you care about achieving your values?
Rationality is the process of humans getting provably better at predicting the future. Evidence-based medicine is rational. “Traditional” and “spiritual” medicine are not rational when their practitioners and customers don’t really care whether their impression that they work stands up to any kind of statistical analysis. Physics is rational: its hypotheses are all tested and open to retesting against experiment, against reality.
When it comes to “winning,” it needs to be pointed out that rationality, when consciously practiced, allows humans to meet their consciously perceived and explicitly stated goals more reliably. You need to be rational to notice that this is true, but it isn’t a much bigger leap than “I think therefore I am.”
One could analyze things and conclude that rationality does not enhance humanity’s prospects for surviving our own sun’s supernova, or does not materially enhance your own chances of immortality; I imagine strong cases could be made for both. While being rational, I continue to pursue pleasure, happiness, and satisfaction in ways that don’t always make sense to other rationalists, and to the extent that I find them, I don’t much care that other rationalists do not think what I am doing makes sense. But ultimately, I look at the pieces of my life, and my decisions, through rational lenses whenever I am interested in understanding what is going on, which is not all the time.
Rationality is a great tool. It is something we can get better at, by understanding things like physics, chemistry, engineering, applied math, economics and so on, and by understanding human mind biases and ways to avoid them. It is something that sets humans apart from other life on the planet, and something that sets many of us apart from many other humans on the planet, being a strength many of us have over those we compete with for status, mates, and so on. Rationality is generally great fun, like learning to drive fast or to fly a plane.
And if you use it right, you can get laid, and then have more data available for determining if that’s what you REALLY want.
So far, humans are life’s best bet for surviving the day our Sun goes supernova.
Because we don’t have a better one (yet?).
Not to detract from your point, but that’s pretty unlikely, unless it becomes part of a tight binary several billion years down the road, after it has turned into a white dwarf. Of course, by then Earth will have been destroyed during the Sun’s red giant stage.
This is a pedantic point in context, but our solar system almost certainly isn’t going to develop into a supernova. There’s quite a menagerie of described or proposed supernova types, but all result either from core collapse in a very massive star (more than eight or so solar masses) or from accretion of mass (usually from a giant companion) onto a white dwarf star.
A close orbit around a giant star would sterilize Earth almost as well, though, and that is developmentally likely. Though last I heard, Earth is expected to become uninhabitable well before the Sun reaches its giant stage, as it’s slowly growing more luminous over time.
Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (humans being too resource-intensive), shouldn’t we do that? Say the AI awakens and figures out that the probability of intelligence given life is very high, but that the probability of life staying around, given the destructive tendencies of human intelligence, is not so good. Call it an ecofascist AI if you want. Wouldn’t that be desirable iff the probabilities are as stated?
As a human, I find solutions that destroy all humans to be less than ideal. I’d prefer a solution that curbs our “destructive tendencies”, instead.
But is there a rational argument for that? Because on a gut level, I just don’t like humans all that much.
I think you’re wrong about your own preferences. In particular, can you think of any specific humans that you like? Surely the value of humanity is at least the value of those people.
Then there may, indeed, be no rational argument (or any argument) that will convince you; a fundamental disagreement on values is not a question of rationality. If the disagreement is sufficiently large (the canonical example around here being the paperclip maximiser), it may be impossible to settle outside of force. Now, as you are not claiming to be a clippy (what happened to Clippy, anyway?), you are presumably human, at least genetically, so you’ll forgive me if I suspect a certain amount of signalling in your misanthropic statements. So your real disagreement with LW thought may not be so large as to require force. How about if we just set aside a planet for you, and the rest of us spread out into the universe, promising not to bother you in the future?
CAE_Jones answered the first part of your question. As for the second part, the human-style half-assed implementation of it is the best we can do in many circumstances, because bringing to bear the full machinery of mathematical logic would be prohibitively difficult for many things. However, just because it’s hard to talk about things in fully logical terms doesn’t mean we should throw up our hands and pick random viewpoints. We can take steps to improve our reasoning, even with our mushy illogical biological brains.