In Defence of False Beliefs
If you are reading this, chances are that you strive to be rational.
In Game Theory, an agent is called “rational” if they “select the best response—the action which provides the maximum benefit—given the information available to them.”
This is aligned with Eliezer’s definition of instrumental rationality as “making decisions that help you win.”
But crucially, Eliezer distinguishes a second component of rationality, epistemic rationality—namely, forming true beliefs. Why is this important to “win”?
Very trivially, because more accurate beliefs—beliefs that better reflect reality—can help you make better decisions. Thus, one would generalise that “truth” is inevitably a net positive for an individual (see the Litany of Gendlin).
But is this true? Excluding edge cases where your knowledge of the truth itself dooms you to suffer—e.g., witnessing a murder and then being killed to keep you silent—is knowing the truth always a net positive?
(Note that here “benefit” and “win” are defined subjectively, based on your own preferences, whatever they might be. Also, we are using “truth” as a binary feature for simplicity, but in reality, beliefs should be graded on a spectrum of accuracy, not as 0 or 1).
The Value of Beliefs
We can do a bit better than just saying “true beliefs make for better decisions”. We can quantify it.
We can define the decision value of belief X as the total payoff of the actions that the agent will select, given their knowledge of X, minus the total payoff of the actions they would have taken under their previous/alternative belief.
In other words, how much of a difference does it make, in terms of the outcome of their decisions over their lifetime, whether they hold belief X or the next best alternative.
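To pin this down with a bit of entirely home-made, non-standard notation: write \(a_t^{X}\) for the action the agent picks at decision point \(t\) while holding belief \(X\), \(a_t^{X'}\) for the action they would have picked under the next best alternative belief \(X'\), and \(U(\cdot)\) for the payoff of an action. Then

\[
\mathrm{DV}(X) \;=\; \sum_{t} U\!\big(a_t^{X}\big) \;-\; \sum_{t} U\!\big(a_t^{X'}\big).
\]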
A few interesting insights can be derived from this definition:
Decision value depends on the rationality of the agent. A more rational agent can make better use of the information in their possession, thus increasing the value of their beliefs. In other words, knowledge is more powerful in the hands of someone who knows what to do with it.
The effective decision value—the actual delta in utility that we can record at the end of someone’s life—depends strongly on the circumstances: how often did you make use of that belief? How important were those decisions? How much did your decision impact the outcome?
If a belief does not change your decisions—or outcomes—at all, then it has decision value = 0.
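Here is a minimal code sketch of the same idea, if it helps make the definition concrete; every decision point and payoff number in it is invented purely for illustration:

```python
# A minimal, illustrative sketch of "decision value". Every scenario and
# payoff number below is invented purely for illustration.

def decision_value(payoffs, choices_with_belief, choices_without_belief):
    """Total payoff of the actions chosen while holding the belief,
    minus the total payoff of the actions chosen under the alternative."""
    total_with = sum(payoffs[d][a] for d, a in choices_with_belief.items())
    total_without = sum(payoffs[d][a] for d, a in choices_without_belief.items())
    return total_with - total_without

# Payoff of each available action at three everyday decision points.
payoffs = {
    "storm_forecast": {"bring_umbrella": 2.0, "get_soaked": -3.0},
    "rotten_bridge":  {"take_detour": 1.0, "cross_anyway": -10.0},
    "heavy_vase":     {"carry_carefully": 0.0, "toss_it": -1.0},
}

# Case 1: the belief changes one decision, so it has positive decision value.
knows_bridge_is_rotten = {
    "storm_forecast": "bring_umbrella",
    "rotten_bridge": "take_detour",
    "heavy_vase": "carry_carefully",
}
thinks_bridge_is_fine = dict(knows_bridge_is_rotten, rotten_bridge="cross_anyway")
print(decision_value(payoffs, knows_bridge_is_rotten, thinks_bridge_is_fine))   # 11.0

# Case 2: the belief changes nothing (general relativity vs "heavy things
# fall down" for everyday life), so its decision value is exactly 0.
print(decision_value(payoffs, knows_bridge_is_rotten, knows_bridge_is_rotten))  # 0.0
```

The only point the sketch is meant to carry is the last one: if two beliefs produce identical choices, the subtraction returns zero, no matter how different the beliefs are.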
The last one in particular is very consequential. We might be enamoured with our scientific theories and the equations of general relativity and quantum physics. But for most people, in their day-to-day, believing that gravity behaves according to general relativity, to Newtonian gravitation or to “heavy things fall down”, makes almost no difference.
This is—in my humble opinion—the main reason why, sadly, rationalists don’t systematically win. This is of course context dependent, but chances are that most of your scientific knowledge, most of the secrets of the universe in your keeping, is—tragically—pretty much useless to you.
Now, to address some easy counterarguments:
Yes, clearly the decision value of a belief should include the value of the beliefs that can be inferred from the first one. Thus, if you give a hunter-gatherer knowledge of Archimedes’ Principle, they’ll be able to build boats and reap tremendous benefits. But if you give them knowledge of nuclear fission, they’ll be able to do absolutely nothing with it. It’ll be no more valuable than believing that things are made of “magic stones”.
Yes, the limitations of an individual are not those of a collective. Which is why true beliefs are enormously more valuable to a society than to an individual. A country is capable of leveraging true beliefs about nuclear engineering in a way that no individual can. But often it takes tremendous time and effort for true beliefs to go from individual truths to societal truths. (And at best, it makes “truth” a social optimum but not a dominant strategy for individuals.)
« Well, alright », you say, « you are showing that a bunch of true beliefs are quite useless, but this just sets the value of those beliefs to 0, not to a negative number. Thus, “truth” overall is still a net positive ».
Not so fast.
Knowledge for Knowledge’s Sake
We’ve talked about the decision value of beliefs—how much they help you make better decisions in your life. But is that all there is to knowledge? Not by a long shot.
Knowledge (the set of your beliefs) has, in fact, another type of value: intrinsic value. This is the value (payoff/benefit/happiness) that you derive directly from holding a particular belief.
When a high schooler thinks that their crush is in love with them, that simple belief sends them over the moon. In most cases, it will have minimal decision value, but the effect on their utility is hard to overstate.
So a true belief—even if useless—can make you happier (or whatever other dimension you optimise on), and thus it is valuable.
But it works both ways.
A false belief, even if it will—on average—have negative decision value when compared to a true belief, might have a sufficiently high intrinsic value to make the overall delta positive. Namely, having a false belief will make you happier and better off.
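Using the same home-made notation as before, with \(\mathrm{IV}(X)\) for intrinsic value, the claim is just:

\[
V(X) \;=\; \mathrm{DV}(X) + \mathrm{IV}(X),
\qquad
V(X_{\mathrm{false}}) - V(X_{\mathrm{true}})
\;=\;
\underbrace{\big[\mathrm{DV}(X_{\mathrm{false}}) - \mathrm{DV}(X_{\mathrm{true}})\big]}_{\text{typically}\;\le\;0}
\;+\;
\underbrace{\big[\mathrm{IV}(X_{\mathrm{false}}) - \mathrm{IV}(X_{\mathrm{true}})\big]}_{\text{can be large and positive}}.
\]

If, say, the decision cost of the false belief is 2 and its intrinsic gain is 5, the overall delta is +3 and the false belief comes out ahead.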
Don’t believe me? Let’s look at an absolutely random example.
Gina believes in an omnipotent entity called Galactus. She believes that Galactus oversees the universe with its benevolence and directs the flows of events towards their destiny. Every morning, she wakes up with a smile and starts her day confident that whatever happens, Galactus has her back. But she doesn’t slouch! She works hard and tries to make the most intelligent decisions she can, according to her best understanding of the latest science. After all, Galactus is also the patron of science and intelligence!
Lucius doesn’t believe in any such thing. He is a stone-cold materialist and aspiritualist who spends his free time arguing online about the stupidity of Galactus believers and how they will never understand the true nature of life and the universe. He also makes an effort to make the most intelligent decisions he can, trying to prove to the world that rationalists can win after all. But every day is a challenge, and every challenge is a reminder that life is but a continuous obstacle race, and you are running solo.
Note that with the exception of their differences about spirituality, Gina and Lucius have pretty much the same beliefs, and coincidentally very similar lives (they both aspire to live a “maximum happiness life”). You might wonder how Gina can reconcile her spiritual belief with her scientific knowledge, but she doesn’t have to. She is very happy never to run a “consistency check” between the two. To Lucius’ dismay.
« How can you not see how absurd this Galactus thing is? » he says, exasperated, as they share a cab to the airport.
« It doesn’t seem absurd to me » Gina answers with a smile.
Lucius starts wondering whether she actually believes in Galactus, or only believes that she believes in the deity. Wait, was that even possible? He can’t remember that part of the Sequences very well... he’ll have to reread them. Can one even choose one’s own beliefs? Could he now decide to believe in Galactus? Probably not... if something doesn’t feel right, doesn’t feel “true”, if something doesn’t... fit with your world view, it’s just impossible to force it in. Well, maybe he can show Gina why her belief doesn’t fit? Wait, would that be immoral? After all, she seems so happy...
Unfortunately, Lucius doesn’t get to make that last decision. The cab driver is looking at his phone and doesn’t see that the car in front has stopped suddenly. In half a dozen seconds, everything is over.
So who “won”? Who has lived the “better” life? Gina or Lucius?
I think Gina won by a mile.
On a day-to-day basis, their decisions were practically identical, so the decision values of their beliefs in spirituality were virtually 0. Lucius worked very hard because he believed that in a world without “spirits” he was the only one he could count on. But Gina worked hard because she believed that’s what a good Galactusean should do. Lucius believed in science and rationality because it’s the optimal decision strategy. Gina believed in them because it’s what Galactus recommends. Etc.
You might argue that in some obscure node of the graph, Lucius’ beliefs were inevitably more accurate and thus led to marginally better decisions. But even so, I think Gina has a huge, enormous, easily-winning asset on her side: optimism.
Every day, Gina woke up believing that things would be okay, that whatever happened, Galactus had a plan in mind for her. Galactus had her back.
Every day, Lucius woke up believing that he had to fight even harder than the day before, because whatever happened, he could only count on himself. No fairy godfather had his back.
“Happiness” (“Utility”, “Life satisfaction”) doesn’t depend only on what you feel and experience now. It also depends on what you expect for your future.
And when your “true” beliefs negatively affect your expectations, without a sufficient counteracting improvement in life outcomes, you might have been better off with false ones.
Whether you can, in fact, choose your beliefs is beyond the scope of this essay. But I leave you with the same question Lucius was asking:
Should he really have tried to show Gina that she was wrong, knowing what you know now?