Anti-epistemology is a genuine danger in actual life,
So it is, but I’m wondering if anyone can suggest a (possibly very exotic) real-life example where “epistemic rationality gives way to instrumental rationality”? Just to address the “hypothetical scenario” objection.
EDIT: Does the famous Keynes quote “Markets can remain irrational a lot longer than you and I can remain solvent.” qualify?
Situations of plausible deniability for politicians or people in charge of large departments at corporations. Of course you could argue that these situations are bad for society in general, but I’d say it’s in the instrumental interest of those leaders to seek the truth to a lesser degree.
Any time you have a bias you cannot fully compensate for, there is a potential benefit to putting instrumental rationality above epistemic.
One fear I was unable to overcome for many years was that of approaching groups of people. I tried all sorts of things, but the best piece of advice turned out to be: “Think they’ll like you.” Simply believing that eliminates the fear and aids my social goals, even though it sometimes proves to have been a false belief, especially with regard to my initial reception. Believing that only 3 out of 4 groups will like or welcome me initially and 1 will rebuff me, even though this may be the case, has not been as useful as believing that they’ll all like me.
It doesn’t sound like you were very successful at rewriting this belief, because you admit in the very same paragraph that your supposedly rewritten belief is false. What I think you probably did instead is train yourself to change the subject of your thoughts in that situation from “what will I do if they don’t like me” to “what will I do if they like me”, and maybe also rewrite your values so that you see being rebuffed as inconsequential and not worth thinking about. Changing the subject of your thoughts doesn’t imply a change in belief unless you believe that things vanish when you stop thinking about them.
Let’s suppose that when you believe you have a chance X of succeeding, your actual chance of success is 0.75 X (because you can’t stop your beliefs from influencing your behavior). The winning strategy seems to be to believe in 100% success, and thus succeed in 75% of cases. On the other hand, trying too hard to find a value of X that yields exact predictions would bring you to believing in 0% success… and being right about it. So in this (not so artificial!) situation, a rationalist should prefer success to being right.
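The fixed-point logic in that scenario can be checked in a few lines; a toy sketch, assuming the 0.75 multiplier from the comment above:

```python
def actual_success(believed: float, factor: float = 0.75) -> float:
    """Actual success probability, given the believed probability."""
    return factor * believed

# Believing in certain success yields the best achievable outcome.
print(actual_success(1.0))  # 0.75

# A *calibrated* belief must satisfy x = 0.75 * x, whose only solution
# is x = 0: iterating belief -> outcome drives the belief to zero.
x = 1.0
for _ in range(100):
    x = actual_success(x)
print(round(x, 6))  # 0.0
```

So the only self-consistent belief is 0% success, exactly as the comment argues.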
But in real life, unexpected things happen. Imagine that you somehow reprogram yourself to genuinely believe that you have a 100% chance of success… and then someone comes and offers you a bet: you win $100 if you succeed, and lose $10,000 if you fail. If you genuinely believe in 100% success, this seems like an offer of free money, so you take the bet. Which you probably shouldn’t.
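The gap between the believed and the actual expected value of that bet is easy to make concrete; a minimal sketch using the payoffs above (the 75% figure is the one assumed earlier in the thread):

```python
win, loss = 100, -10_000  # payoffs from the bet described above

def expected_value(p: float) -> float:
    """Expected payoff of the bet at success probability p."""
    return p * win + (1 - p) * loss

print(expected_value(1.0))   # what the self-deceived agent computes
print(expected_value(0.75))  # the actual expected payoff
```

At the believed probability the bet is worth $100; at the true probability it is worth −$2,425, which is why the deluded agent takes a bet it shouldn’t.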
For an AI, a possible solution could be this: Run your own simulation. Make this simulation believe that the chance of success is 100%, while you know that it really is 75%. Give the simulation access to all inputs and outputs, and just let it work. Take control back when the task is completed, or when something very unexpected happens. -- The only problem is to balance the right level of “unexpected”; to know the difference between random events that belong to the task, and the random events outside of the initially expected scenario.
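As a toy illustration only (the `outer_agent` function and its surprise threshold are invented for this sketch, not a real design), the monitor-and-take-back-control idea might look like:

```python
def outer_agent(events, surprise_threshold=0.01):
    """Sketch: a 100%-confident inner simulation handles the task, while
    the outer agent (which knows the true odds) watches each event and
    reclaims control when one looks too improbable to belong to the task."""
    log = []
    for name, probability in events:
        if probability < surprise_threshold:
            log.append(f"take back control: {name}")
            break
        log.append(f"inner agent handles: {name}")
    else:
        log.append("task completed")
    return log

# Ordinary task noise is left to the inner agent; a stranger offering a
# suspicious bet (assigned a very low prior) triggers the takeover rule.
print(outer_agent([("minor setback", 0.2),
                   ("suspicious bet offer", 0.001)]))
```

The hard part, as the comment notes, is choosing the threshold that separates in-task randomness from genuinely out-of-scenario events.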
I suppose evolution gave us similar skills, though not so precisely defined as in the case of an AI. An AI simulating itself would need twice as much memory and time; instead, humans use compartmentalization as an efficient heuristic. Instead of having one personality that believes in 100% success and another that believes in 75%, a human just convinces themselves that the chance of success is 100%, but prevents this belief from propagating too far, so they can reap the benefits of the imaginary belief while avoiding some of its costs. This heuristic is a net advantage, though sometimes it fails, and other people may be able to exploit it: to use your own illusions to lead you to the logical decision that you should take the bet, while avoiding any suspicion of something unusual. -- In this situation there is no original AI which could take back control, so this strategy of false beliefs is accompanied by a rule: “if there is something very unusual, avoid it, even if it logically seems like the right thing to do”. That means not trusting your own logic, which in such a situation is very reasonable.
I do this every day, correctly predicting I’ll never succeed at stuff and not getting placebo benefits. I don’t dare try compartmentalization or self-delusion, for the reasons Eliezer has outlined. There are some other complicating factors too. It’s a big problem for me.
Be careful of this sort of argument, any time you find yourself defining the “winner” as someone other than the agent who is currently smiling from on top of a giant heap of utility.
(from “Newcomb’s Problem and Regret of Rationality”)
Yeah, I know that, but I’m not convinced fooling myself won’t result in something even worse. Better ineffectively doing good than effectively doing evil.
As part of a fitness regime, you might try to convince yourself that “I have to do 50 press-ups every day”. Strictly speaking, you don’t: if you do fewer every now and again it won’t matter too much. Nonetheless, if you believe this, your will will crumble and you’ll slack off too regularly. So you try to forget about that fact.
Kind of like an epistemic Schelling point.