That looks like losing your rationality by reading LessWrong. As does this by XiXiDu that he links to.
A couple of quotes from the latter strike me:
Logical implications just don’t seem enough in some cases.
and
Until the above problems are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications
That is as it should be. Blindly following logic wherever it takes you is like strapping yourself to a rocket with no steering.
I’ve never been able to make sense of the traditional koans, not because I find them hard puzzles, but because I don’t even see what puzzle is being posed. But we have here in the LessWrong material koans aplenty.
Mentalism cannot be true! Physicalism cannot be true!
Bayesian reasoning is the only way! We cannot do Bayesian reasoning!
Aumann agreement! Dissension among rational people!
Human intelligence is possible! After sixty years of trying we haven’t the slightest idea how!
Trolley problems!
TORTURE vs. SPECKS!
Quantum suicide!
Give me all your money and I’ll repay you 3^^^3-fold!
The Utility Monster!
The Repugnant Conclusion!
You spend one dead child at Starbucks every year!
Vast stakes depend on your slightest decision! You cannot evaluate them! You must evaluate them!
You have six hours to cut down a tree! It will take twelve hours to sharpen your axe! The first god we make will torture you forever for failing!
Until very recently I thought it might be just me and that you people can calculate what you should do. But then I learnt that even important SI donors have similar problems. And other people as well. The problem is that all the talk about approximations is complete handwaving and that you really can’t calculate shit. And even if you could, there doesn’t seem to be anything medium-probable that you could do about it.
‘Some years ago I was trying to decide whether or not to move to Harvard from Stanford. I had bored my friends silly with endless discussion. Finally, one of them said, “You’re one of our leading decision theorists. Maybe you should make a list of the costs and benefits and try to roughly calculate your expected utility.” Without thinking, I blurted out, “Come on, Sandy, this is serious.”’
— Persi Diaconis, in The Problem of Thinking Too Much
No argument from me on any of that. I’ve said a couple of times on LW that I don’t believe that people have or can have utility functions (I’m not alone on LW there), that approximating Solomonoff induction is impractical on at least an NP scale, and that large-world Bayesianism is tantamount to AGI, which nobody knows how to do. (Framing the AGI problem as LWB doesn’t help solve AGI, it’s an argument against expecting to succeed at LWB.)
But where does that leave us? What does one do instead?
To abandon “rationality” is to throw out the baby with the bathwater. The application of Bayes’ theorem to screening tests for rare conditions remains just as valid, just as practical, and just as essential for making correct medical decisions. Noticing that a discussion has wandered into disputing definitions remains just as valid a sign that the discussion has gone wrong. The usefulness of checklists for ensuring that complex procedures are reliably performed does not go away. When thine I offends thee, try the many practical pieces of advice for personal development that have appeared here or elsewhere and see what one can make work. And so on.
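The screening-test point is worth making concrete. A minimal sketch, with hypothetical numbers for prevalence, sensitivity, and specificity chosen only to illustrate the base-rate effect:

```python
# Bayes' theorem applied to a screening test for a rare condition.
# All figures below are hypothetical, for illustration only.
prevalence = 0.001     # P(condition): 1 in 1000
sensitivity = 0.99     # P(positive | condition)
specificity = 0.95     # P(negative | no condition)

# Total probability of a positive result: true positives + false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of the condition given a positive test.
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.3f}")
```

Even with a test this accurate, a positive result leaves the probability of actually having the condition at roughly 2% — the false positives from the healthy majority swamp the true positives. This is exactly the sort of small-world calculation that remains valid and practical.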
All of this “small-world” rationality may not be as exciting, to some temperaments, as talking about AGIs, uploads, Tegmark multiverses, and 3^^^3 specks, but it has the great advantage (to my temperament) of having an actual subject matter here and now, and of not driving crazy the people who take it seriously. It works when taken seriously. And there’s this to consider: it counts against the wilder speculations of LW just as much as against kobolds.
For the rest, my rule of thumb (the small-world response to large-world problems, as Diaconis says in that paper) for judging the worth of such speculation is this: if it isn’t being done with mathematics, it’s useless. That is a necessary condition, not a sufficient one, but it weeds out almost everything. AGI? Make one. Friendliness proof methods? Write them up in the Journal of Symbolic Logic. TDT? Ditto.
Aumann agreement! Dissension among rational people!
This one’s easy; I’m guessing this is about “rational” people (lesswrongers for instance) disagreeing. “Rational” in the above sentence isn’t the same as rational as defined in Aumann’s paper.
Specifically, we’re human beings: two of us don’t necessarily share the same priors, nor have common knowledge of our posteriors for every possible event A. So we’re bound to disagree sometimes.
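The priors point can be made concrete with a minimal sketch (numbers hypothetical): two agents who see the same evidence and update correctly by Bayes’ rule still end up apart, because Aumann’s theorem assumes common priors and these two don’t have them.

```python
# Two Bayesian agents observe the same evidence E and update by Bayes' rule.
# Without a common prior, Aumann's agreement theorem does not apply,
# and their posteriors can legitimately differ.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Identical likelihoods for the shared evidence, different priors
# (hypothetical numbers):
alice = posterior(0.5, 0.8, 0.3)  # Alice's prior P(H) = 0.5
bob = posterior(0.1, 0.8, 0.3)    # Bob's prior P(H) = 0.1

print(f"Alice: {alice:.2f}, Bob: {bob:.2f}")  # Alice ≈ 0.73, Bob ≈ 0.23
```

Both update flawlessly on the same data, yet their posteriors land far apart — which is why “rational people disagreeing” isn’t, by itself, a counterexample to Aumann.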
I really hate when people tease me with promises of my more confident beliefs being wrong. Hey, everyone: I’ve read around as much as y’all on rationality, physics, geography, and I just wanted to tell you that the Earth is flat. Yeah, yeah, I know it’s hard to believe, but you’re just going to have to trust me. I might go into more detail some other time, but y’know, it would be kind of a lot of work (I think)… soooo probably not, but maybe! maybe… anyway, you’re wrong.
Guys, guys… I can see you’re upset, but I’m posting this on the internet for my benefit, not yours.
I can sympathize a little with his first point:
Physicalism isn’t actually making any sense. It is said that a real answer should make things less mysterious.
and
To use a programmer saying, “Some people when confronted with the hard problem of consciousness think, ‘I know, I’ll use reductionism!’. Now they have two problems.”.
If you have stripped down your motorbike engine and rebuilt it and gone for a ride, you have a paradigm of reductionism. You strip the thing down to a few hundred parts (including all the little bits: the nuts, circlips, and washers), and if you are clever and don’t lose any bits and do the adjustments correctly, it works when you reassemble it and you pass the test. You understand how it works, and you know that you understand how it works.
So naturally you carry that expectation over to the hard problems of consciousness. You study the brain, decide it’s really just a computer, and start coding a brain in Java. After failing hard, you realise that your motorbike engine paradigm just lost a connecting rod through its crankcase. Reductionism hasn’t made the brain any less mysterious, at least not in the hands-on sense that you hoped. The brain is too intricately detailed to be understood mechanically, where “mechanically” implies comprehensible working parts numbering in the hundreds, not in the hundreds of billions.
So what next? My answer is that the human brain will always be mysterious, in the same sense that the source code of the Windows operating system will always be mysterious. There is too much of it; too much intricate detail. I just shrug. I never took mechanical to imply sufficiently few working parts that I can actually understand the mechanism. But I can see lots of people understanding the mechanical aspect of physicalism through some kind of implicit motorbike engine metaphor and correctly realising that reductionism isn’t going to explain consciousness in the way that they had hoped.
I can’t quite understand what the author means in the relevant section of that link, and given that he’s consciously rejected consequentialism, my prior is low that what he has to say on the topic is very useful.
I think there’s a risk of losing your rationality by practicing meditation.
explain please?