I think there is “machinery that underlies counterfactual reasoning”
I agree that counterfactual reasoning is contingent on certain brain structures, but I would say the same about logic as well, and it’s clear that the logic of a kindergartener is very different from that of a logic professor. Although perhaps we’re getting into a semantic debate, and what you mean is that the fundamental machinery is more or less the same.
I was initially assuming (by default) that if you’re trying to understand counterfactuals, you’re mainly trying to understand how this machinery works. But I’m increasingly confident that I was wrong, and that’s not in fact what you’re interested in. Instead it seems that your interests are more like “how would an AI, equipped with this kind of machinery, reach correct conclusions about the world?”
Yeah, this seems accurate. I see understanding the machinery as the first step towards the goal of learning to counterfactually reason well. As an analogy, suppose you’re trying to learn how to reason well. It might make sense to figure out how humans reason, but if you want to build a better reasoning machine and not just duplicate human performance, you’d want to be able to identify some of these processes as good reasoning and some as biases.
I don’t think there’s a clean separation between “good counterfactual reasoning” and “good reasoning in general”.
I guess I don’t see why there would need to be a separation in order for the research direction I’ve suggested to be insightful. In fact, if there isn’t a separation, this direction could even be more fruitful as it could lead to rather general results.
If I say some counterfactual nonsense like “If the Earth were a flat disk, then the north pole would be in the center,” I think the reason it’s nonsense lives at the object level, i.e. in the detailed content of the thought in the context of everything else we know about the world.
I would say (as a slight simplification) that our goal in studying counterfactual reasoning should be to get counterfactuals to a point where we can answer questions about them using our normal reasoning.
I think that “what counterfactuals make sense in the context of decision-making” is a decision theory question, not a counterfactuals question, and I expect a good answer to look like an explicit discussion of decision theory rather than a more general discussion of the philosophical nature of counterfactuals.
That post certainly seems to contain an awful lot of philosophy to me. And I guess even though this post and my post On the Nature of Counterfactuals don’t make any reference to decision theory, that doesn’t mean that it isn’t in the background influencing what I write. I’ve written a lot of posts here, many of which discuss specific decision theory questions.
I guess I would still consider Joe Carlsmith’s post a high-quality post if it had focused exclusively on the more philosophical aspects. And I guess philosophical arguments are harder to evaluate than mathematical ones, which can be disconcerting for some people, especially those used to the certainty of mathematics, but I believe it’s possible to get to the level where you can avoid formalising things a lot of the time because you have enough experience to know how things will shake out.
Although I suppose in this case my reason for avoiding formalisation is that I see premature formalisation as a critical error. Once someone has produced a formal theory, they will feel psychologically compelled to defend it, especially if it is mathematically beautiful, so I believe it’s important to be very careful about making sure the assumptions are right before attempting to formalise anything.