I think instrumental rationalists should perhaps follow a modified Tarski litany, “If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X”. ;-)
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I also need to navigate, will weaken my ability to think, care, and act with my whole mind.
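(A toy decision-theoretic restatement of the trade-off, purely my own illustration; the symbols below aren’t from anyone’s comment:)

$$
\mathbb{E}\big[U(\text{adopt belief } X)\big] \;=\; \underbrace{p(Y \mid \text{believe } X)\, U(Y)}_{\text{local payoff the modified litany tracks}} \;-\; \underbrace{C_{\text{walls}}(X)}_{\text{ongoing cost of defending } X \text{ against not-}X\text{ data}}
$$

The modified litany says to believe X whenever this quantity beats the alternative; the counter-argument is that $C_{\text{walls}}(X)$ is paid diffusely, across the whole mind and over time, so it is easy to underestimate at the moment of choosing what to believe.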
You can believe a falsity for the sake of utility while alieving a truth for the sake of sanity. Deep down you know you’re not the best golfer, but there’s no reason to critically analyze your delusion if believing it has been shown time and time again to make you a better golfer. The problems occur when your occupation is ‘FAI programmer’ or ‘neurosurgeon’ instead of ‘golfer’. But most of us aren’t FAI programmers or neurosurgeons; we just want to actually turn in our research papers on time.
It’s not even really that dangerous, as rationalists can reasonably expect their future selves to update on evidence that their past-inherited beliefs aren’t getting them utility (aren’t true): by this theory, passive avoidance of rationality is epistemically safer than active doublethink (which might not even be possible, as Eliezer points out). If something forces you to really pay attention to your false belief, then the active process of introspection will lead to it being destroyed by the truth.
Added: You know, now that I think about it more, the real distinction in question isn’t aliefs and beliefs but instead beliefs and beliefs in beliefs; at least that’s how it works when I introspect. I’m not sure whether studies show that performance is increased by belief in belief or whether the effect is limited to ‘real’ belief. Therefore my whole first paragraph above might be off-base; does anyone know the literature? I just have the secondhand CliffsNotes pop-psych version. At any rate the second paragraph still seems reasonably clever… which is a bad sign.
Double added: Mike Blume’s post indicates my first paragraph may not have been off the mark. Belief in belief seems sufficient for performance enhancement. Actually, as far as I can tell, Blume’s post really just kinda wins the debate. Also see JamesAndrix’s comment.
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I also need to navigate, will weaken my ability to think, care, and act with my whole mind.
Honestly, this sounds to me like compartmentalization to protect the belief that non-compartmentalization is useful, especially since the empirical evidence (both scientific experimentation and simple observation) is overwhelmingly in favor of the over-optimistic having instrumental advantages.
In any case, anticipating an experience has no truth value. I can anticipate having lunch now, for example; is that true or untrue? What if I have something different for lunch than I currently anticipate? Have I weakened my ability to think/care/act with my whole mind?
Also, if we are really talking about the whole mind, then one must consider the “near” mind as well as the “far” one… and they tend to be in resource competition for instrumental goals. To the extent that you think in a purely symbolic way about your goals, you weaken your motivation to actually do anything about them.
What I’m saying is, decompartmentalization of the “far” mind is all well and good, as is consistency within the “near” mind and, in general, correlation of the near and far minds’ contents. But there are types of epistemic beliefs that scads of scientific evidence show to be empirically dangerous to one’s instrumental output, and that should therefore be kept out of “near” anticipation.
The level of mental unity (I prefer this to “decompartmentalization”) that would make it impossible to focus productively on a learnable physical/computational performance task is, fortunately, impossible to achieve, or at least easy to drop temporarily.