A friend recently complained to me about this post: he said most people do much nonsense under the heading “belief”, and that this post doesn’t acknowledge this adequately. He might be right!
Given his complaint, perhaps I ought to say clearly:
1) I agree — there is indeed a lot of nonsense out there masquerading as sensible/useful cognitive patterns. Some aimed to wirehead or mislead the self; some aimed to deceive others for local benefit; lots of it simple error.
2) I agree also that a fair chunk of nonsense adheres to the term “belief” (and the term “believing in”). This is because there’s a real, useful pattern of possible cognition near our concepts of “belief”, and because nonsense (/lies/self-deception/etc) likes to disguise itself as something real.
3) But — to sort sense from nonsense, we need to understand the real pattern (useful; the kind that might appear in the cogsci books of alien intelligences) that lies near our "beliefs". If we don't:
a) We’ll miss out on a useful way to think. (This is the biggest one.)
b) The parts of the {real, useful way to think} that fall outside our conception of "beliefs" will be practiced noisily anyway, at least sometimes: sometimes in a true fashion, sometimes mixed (intentionally or accidentally) with error or local manipulations. We won't be able to excise these deceptions easily or fully, because it'll be kinda clear there's something real nearby that our concept of "beliefs" doesn't do justice to, and so people (including us) will not wish to adhere entirely to our concept of "beliefs" and give up the so-called "nonsense" that isn't entirely nonsense. So it'll be harder to expel actual error.
4) I’m pretty sure that LessWrong’s traditional concept of “beliefs” as “accurate Bayesian predictions about future events” is only half-right, and that we want the other half too, both for (3a) type reasons, and for (3b) type reasons.
a) “Beliefs” as accurate Bayesian predictions is exactly right for beliefs/predictions about things unaffected by the belief itself — beliefs about tomorrow’s weather, or organic chemistry, or the likely behavior of strangers.
b) But there’s a different “belief-math” (or “believing-in math”) that’s relevant for coordinating pieces of oneself in order to take a complex action, and for coordinating multiple people so as to run a business, community, or other collaborative endeavor. I think I lay it out here (roughly — I don’t have all the math), and I think it matters.
The old LessWrong Sequences-reading crowd *sort of* knew about this — folks talked about how beliefs about matters directly affected by the beliefs could be self-fulfilling or self-undermining prophecies, and how the usual Bayes-math isn’t well-defined in that territory. But when I read those comments, I thought they were discussing an uninteresting edge case. The idioms by which we organize complex actions (within a person, and between people) are part of the bread and butter of how intelligence works; they are not an uninteresting edge case.
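To make the “Bayes-math isn’t well-defined here” point a bit more concrete, here’s a toy sketch of my own (not something from the post, and the functions are made up for illustration): when the world responds to the belief, a calibrated belief isn’t found by ordinary conditioning but by a fixed-point condition — you need a believed probability p that equals the actual frequency of success given that you believe p.

```python
# Toy illustration (mine, not from the post): when the outcome depends on the
# belief itself, a "calibrated" belief p must satisfy p = f(p), where f(p) is
# the actual chance of success given that you believe p. The fixed-point
# condition, not ordinary conditioning, is what pins the belief down.

def self_fulfilling(p: float) -> float:
    """Hypothetical morale effect: expecting success makes it more likely."""
    return 0.2 + 0.6 * p


def self_undermining(p: float) -> float:
    """Hypothetical complacency effect: expecting success makes it less likely."""
    return 0.8 - 0.5 * p


def consistent_beliefs(f, steps: int = 10_000, tol: float = 1e-4):
    """Scan [0, 1] for believed probabilities that match the outcome odds they cause."""
    hits = {round(i / steps, 3) for i in range(steps + 1)
            if abs(f(i / steps) - i / steps) < tol}
    return sorted(hits)


print(consistent_beliefs(self_fulfilling))   # [0.5]
print(consistent_beliefs(self_undermining))  # [0.533]
```

A steeper self-fulfilling curve can cross the diagonal several times, giving multiple self-consistent beliefs; that underdetermination is part of what a separate “believing in” notion has to resolve, since calibration alone doesn’t say which fixed point to coordinate around.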
Likewise, people talked sometimes (on LW in the past) about how they were intentionally holding false beliefs about their start-ups’ success odds; and they were advised not to be clever, and some commenters dissented from this advice. But IMO the “believing in” concept lets us distinguish:
(i) the useful thing such CEOs were up to (holding a target, in detail, that they and others can coordinate action around);
(ii) how to do this without having or requesting false predictions at the same time; and
(iii) how sometimes such action on the part of CEOs/etc is basically “lying” (and “demanding lies”), in the sense that it is designed to extract more work/investment/etc from “allies” than said allies would volunteer if they understood the process generating the CEO’s behavior (and to demand that their team members be similarly deceptive/extractive). And sometimes it’s not. And there are principles for telling the difference.
All of which is sort of to say that I think this model of “believing in” has substance we can use for the normal human business of planning actions together, and isn’t merely propaganda to mislead people into thinking human thinking is less buggy than it actually is. Also I think it’s as true to the normal English usage of “believing in” as the historical LW usage of “belief” is to the normal English usage of “belief”.