> As I understand the post, its idea is that a rationalist should never “start with a bottom line and then fill out the arguments”.
Ooh! I see a “should” statement. Let’s open it up and see what’s inside!
*gets out the consequentialist box-cutter*
… its idea is that we will get worse consequences if we “start with a bottom line and then fill out the arguments.”
Hmm. Is that what “The Bottom Line” says?
Let’s take a look at what it says about some actual consequences:
> If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well.
In other words, it’s not like you’re sinning or cheating at the rationality game if you write the bottom line first.
Rather, you ran some algorithm to generate that bottom line. You selected that bottom line out of hypothesis-space somehow. Perhaps you used the availability heuristic. Perhaps you used some ugh fields. Perhaps you used physics. Perhaps you used Tristan Tzara’s cut-up technique. Or a Ouija board. Or whatever your car’s driver’s manual said to do. Or your mom’s caring advice.
Well, how good was that algorithm?
Do people who use that algorithm tend to get good consequences, or not?
Once your bottom line is written — once you have made the decision whether or not to fix your brakes — the consequences you experience don’t depend on any clever arguments you made up to justify that decision retrospectively.
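To make “how good was that algorithm?” concrete, here is a minimal sketch, with every number invented purely for illustration, that scores two bottom-line-writing algorithms for the brake example by nothing but the consequences they tend to produce:

```python
import random

def never_repair(failure_prob, repair_cost, crash_cost):
    """The bottom line is fixed in advance: 'my car doesn't need fixing.'"""
    return False  # never repair, whatever the arguments say

def repair_if_worth_it(failure_prob, repair_cost, crash_cost):
    """The bottom line is chosen by comparing expected costs."""
    return failure_prob * crash_cost > repair_cost

def average_cost(algorithm, trials=100_000):
    """Score an algorithm by the consequences it tends to produce."""
    total = 0.0
    for _ in range(trials):
        failure_prob = random.uniform(0.0, 0.3)  # chance the brakes fail
        repair_cost = random.uniform(200, 800)   # invented dollar figures
        crash_cost = 30_000                      # cost if the brakes do fail
        if algorithm(failure_prob, repair_cost, crash_cost):
            total += repair_cost                 # you paid for the repair
        elif random.random() < failure_prob:
            total += crash_cost                  # you didn't, and got unlucky
    return total / trials

for algorithm in (never_repair, repair_if_worth_it):
    print(f"{algorithm.__name__}: average cost ${average_cost(algorithm):,.0f}")
```

The point of the sketch: the scoring function only ever sees which decision got made, never the justifications written underneath it.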
If you come up with a candidate “bottom line” and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn’t really a bottom line — your algorithm hadn’t actually terminated. We can still ask how good your algorithm is, including the exploring and maybe-rejecting. This is where questions about motivated stopping and continuation come in.
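To see why the stopping rule is part of the algorithm, here is a minimal sketch (the evidence model and all its numbers are invented for illustration). It compares a rule that commits in advance to weighing a fixed number of arguments against a motivated-stopping rule that quits the moment the running total favors the conclusion it wanted. Both see the same kind of evidence; only the stopping rule differs.

```python
import random

def evidence(truth):
    """Stream of noisy arguments: slightly positive on average
    iff the claim under consideration is actually true."""
    drift = 0.1 if truth else -0.1
    while True:
        yield random.gauss(drift, 1.0)

def fixed_sample(stream, n=20):
    """Unbiased: commit in advance to weighing n arguments, then decide."""
    return sum(next(stream) for _ in range(n)) > 0

def motivated_stopping(stream, max_looks=20):
    """Biased: stop searching the moment the evidence so far
    favors the preferred conclusion; keep looking otherwise."""
    score = 0.0
    for _ in range(max_looks):
        score += next(stream)
        if score > 0:
            return True  # got the answer we wanted; stop here
    return False

def false_positive_rate(rule, trials=20_000):
    """How often the rule accepts a conclusion that is actually false."""
    return sum(rule(evidence(truth=False)) for _ in range(trials)) / trials

for rule in (fixed_sample, motivated_stopping):
    print(f"{rule.__name__}: accepts a false conclusion "
          f"about {false_positive_rate(rule):.0%} of the time")
```

In this toy setup the motivated-stopping rule endorses the false conclusion roughly twice as often as the committed rule, despite drawing from exactly the same evidence stream; the bias lives entirely in when the algorithm decides it has terminated.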
> If you come up with a candidate “bottom line” and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn’t really a bottom line — your algorithm hadn’t actually terminated.
Oh. That makes sense. So it’s the bottom line only if I write it and refuse to change it ever after. Or, if it was all part of a decision-making process, it’s the belief on which I actually act in the end.
Guess that’s what everybody was telling me… feeling stupid now.
’s all good.

Your real decision is the one you act on. Decision theory, after all, isn’t about what the agent believes it has decided; it’s about the actions the agent chooses.
Edited to add: Also, you recognized where “the biases really struck”, as you put it — that’s a pretty important part. It seems to me that one reason to resist writing even a tentative bottom line too early is to avoid motivated stopping. And if you’re working in a group, this is a reason to hold off on proposing solutions.
Edited again to add: In retrospect I’m not sure, but I think what I triggered on, the thing that led me to respond to your post, was the phrase “a rationalist should”. This fits the same grammatical pattern as “a libertarian should”, “a Muslim should”, and so on … as if rationality were just another ideological identity: one identifies with Rationalist-ism first and then follows the rationalist social rules, having faith that by being a good rationalist one gets to go to rationalist heaven and receive 3^^^3 utilons (and no dust specks), or some such.
I expect that’s not what you actually meant. But I think I sometimes pounce on that kind of thing. Gotta fight the cult attractor! I figure David Gerard has the “keeping LW from becoming Scientology” angle; I’ll try for the “keeping LW from becoming Objectivism” angle. :)
I like your comment-generating algorithm.
Feeling stupid means you’re getting smarter. At least, that’s what I tell myself whenever I feel that past-me did something stupid.