Claim 1: “Be wrong.” Articulating your models and implied beliefs about the world is an important step in improving your understanding. The simple act of explicitly constraining your anticipations so that you’ll be able to tell if you’re wrong will lead to updating your beliefs in response to evidence.
If you want to discuss this claim, I encourage you to do it as a reply to this comment.
I’m not sure exactly what you meant, so not ultimately sure whether I disagree, but I at least felt uncomfortable with this claim.
I think it’s because:
Your framing pushes towards holding beliefs rather than credences in the sense used here.
I think it’s generally inappropriate to hold beliefs about the type of things that are important and you’re likely to turn out to be wrong on. (Of course for boundedly rational agents it’s acceptable to hold beliefs about some things as a time/attention-saving matter.)
It’s normally right to update credences gradually as more evidence comes in. There isn’t so much an “I was wrong” moment.
On the other hand I do support generating explicit hypotheses, and articulating concrete models.
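To make the gradual-updating picture concrete, here is a toy Bayesian sketch (the prior and likelihoods are made-up numbers, not anything from this discussion): the credence drifts a little with each observation, rather than flipping at a single "I was wrong" moment.

```python
# Gradual Bayesian updating of a credence in a hypothesis H.
# The prior and the likelihoods below are invented illustrative numbers.

def update(credence, p_e_given_h, p_e_given_not_h):
    """Return the posterior credence in H after observing evidence E."""
    numerator = credence * p_e_given_h
    denominator = numerator + (1 - credence) * p_e_given_not_h
    return numerator / denominator

credence = 0.5  # prior credence in H
for _ in range(5):
    # Each observation is only weak evidence for H (likelihood ratio 1.5),
    # so the credence creeps up: 0.6, 0.69, 0.77, 0.84, 0.88.
    credence = update(credence, p_e_given_h=0.6, p_e_given_not_h=0.4)
    print(round(credence, 3))
```

In odds form each observation just multiplies the odds by 1.5, which is why no single step looks like a refutation.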
I think this clarifies an important area of disagreement:
I claim that there are lots of areas where people have implicit strong beliefs, and it’s important to make those explicit to double-check. Credences are important for any remaining ambiguity, but for cognitive efficiency, you should partition off as much as you can as binary beliefs first, so you can do inference on them—and change your mind when your assumptions turn out to be obviously wrong. This might not be particularly salient to you because you’re already very good at this in many domains.
This is what I was trying to do with my series of blog posts on GiveWell, for instance—partition off some parts of my beliefs as a disjunction I could be confident enough in to think about it as a set of beliefs I could reason logically about. (For instance, Good Ventures either has increasing returns to scale, or diminishing, or constant, at its given endowment.) What remains is substantial uncertainty about which branch of the disjunction we’re in, and that should be parsed as a credence—but scenario analysis requires crisp scenarios, or at least crisp axes to simulate variation along.
Another way of saying this is that from many epistemic starting points it’s not even worth figuring out where you are in credence-space on the uncertain parts, because examining your comparatively certain premises will lead to corrections that fundamentally alter your credence-space.
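The partition-then-credence move can be sketched in code, using the returns-to-scale disjunction as the branches (the credences and per-branch values are invented placeholders): logical reasoning happens inside each crisp branch, and the credence only enters when aggregating across branches.

```python
# Scenario analysis over a crisp, exhaustive disjunction: exactly one
# branch is true. All numbers below are invented placeholders.

scenarios = {
    # branch of the disjunction -> (credence, value of some action in that branch)
    "increasing returns to scale":  (0.2, 10.0),
    "constant returns to scale":    (0.3, 5.0),
    "diminishing returns to scale": (0.5, 1.0),
}

# The branches are exhaustive and exclusive, so credences must sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Within each branch you reason logically; across branches you weight
# by credence.
expected_value = sum(p * v for p, v in scenarios.values())
print(expected_value)  # 0.2*10 + 0.3*5 + 0.5*1 = 4.0
```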
I think I’d still endorse a bit more of a push towards thinking in credences (in cases where that’s a reasonable thing to do), but I’ll consider further.
I’m all about epistemology. (My blog is at pancrit.org.) But in order to engage in or start a conversation, it’s important to take one of the things you place credence in and advocate for it. If you’re wishy-washy, in many circumstances people won’t actually engage with your hypothesis, so you won’t learn anything about it. Take a stand, even if you’re on slippery ground.
Per my reply to Owen, I think it’s fine to say “X% A, (100-X)% not-A” as a way to start a discussion, and even to be fuzzy about the percentage, but it’s then important to be pretty clear about the structure of A and not-A, to hold some clear “A OR not-A” belief, and to have beliefs about what it would look like if A were true versus false.
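A minimal way to keep that structure explicit is simply to write down A, the (fuzzy) credence, and the anticipated observations under A versus not-A. Everything in this sketch is an invented illustration, not anything from the thread:

```python
# A hypothesis stated crisply enough to be wrong: a fuzzy credence in A,
# plus concrete anticipations under A and under not-A. All invented examples.

hypothesis = {
    "A": "the new onboarding flow raises week-1 retention",  # hypothetical claim
    "credence_in_A": 0.7,  # fuzzy is fine; the structure is the point
    "if_A_expect": "retention in the next cohort is above 40%",
    "if_not_A_expect": "retention in the next cohort stays near the 30% baseline",
}

def check(observed_retention, threshold=0.40):
    """Did the observation land on the A side or the not-A side?"""
    return "looks like A" if observed_retention > threshold else "looks like not-A"

print(check(0.33))  # prints "looks like not-A"
```

The point is only that once the anticipations are written down, you can tell which side of the disjunction an observation falls on.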
I think I’ve gotten a lot out of trying to make my confusions public, because a lot of the time when I’m confused the source is also confused, and if not then I get my confusion resolved quickly instead of slowly.
I typically hesitate before recommending this to other people, because I don’t know what the cost/benefit ratio looks like at different base rates of confusion. If you actually are wrong nine times out of ten, what does being open about that look like?
There is a trade-off, because people (at least those who are not spherical cows in a vacuum) tend to stick to their initial positions. So if you pick a model or belief early, you might find it difficult to abandon. Plus there is the whole feedback loop where holding an existing belief affects which evidence you choose to see.
So it’s not a case of “the more the better”: declaring a flimsily supported position has costs which have to be taken into account. In practice I tend to delay articulating my models until I need to; there is no point in formulating a position when it won’t affect anything.
I think this objection is comparatively strong for non-operationalized “shoulds,” postdictions, and highly general statements that haven’t yet been cashed out into specific anticipations. I think there’s very little harm, and a lot of benefit, in making a model more explicit once you get all the way down to expecting to observe specific things in a hard-to-mistake way.
That’s a separate skill needed to make this advice beneficial, and it’s important to keep the overall skill tree in mind, so thanks for bringing up this issue.
One problem with this is that we are not perfectly rational beings.
If you don’t think you have enough evidence yet to form an opinion on something, it may be better to hold off and not form one yet. Once you form an opinion, it will inherently bias the way you perceive new information, and even the most rational of us tend to be biased towards thinking that our current opinion is already right.
One corollary is that even when you don’t have enough evidence to form an opinion, you can create and start to test a hypothesis for yourself without actually deciding (even in secret) that it is “your opinion”. That way you can get the advantages you’re talking about without precommitting yourself to something that might bias you.
I feel like this goes hand in hand with epistemic humility (often showcased by Yvain) that I try to emulate. If you expose the inner wiring of your argument, including where you’re uncertain, and literally invite people to shoot holes in it, you’re going to learn a lot faster.
You could argue that Slate Star Codex is one big exercise in “Here are some thoughts I’m mildly/moderately/completely sure about, tell me what’s wrong with them and I’ll actually pay attention to your feedback.”
This was helpful to me, thanks.