Is that a problem? What’s wrong with “believing true things”, or, more precisely, “winning bets”?
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational. I think Dutch-book arguments are … not exactly mistaken, but misleading, for this reason. It is not true that the only reason to have probabilistically coherent beliefs is to avoid reliably losing bets. If that were the case, we could throw rationality out the window whenever bets aren’t involved. I think betting is both a helpful illustrative thought experiment (Dutch books illustrate irrationality) and a helpful tool for practicing rationality, but not synonymous with rationality.
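To make the Dutch-book point concrete, here is a minimal sketch in Python (the events, credences, and stakes are purely illustrative, not anything from the discussion above): an agent whose credences in two mutually exclusive, exhaustive outcomes sum to more than 1 will accept a pair of bets, each individually fair by its own lights, that together lose money no matter what happens.

```python
# Minimal Dutch-book illustration (purely illustrative numbers).
# The agent's credences in "rain" and "no rain" sum to 1.1, which is incoherent.
credence_rain = 0.6
credence_no_rain = 0.5

stake = 1.0  # each bet pays `stake` if its event occurs

# A bet paying `stake` if E occurs looks fair to the agent at a price of
# credence(E) * stake, so the agent is willing to buy both of these bets.
total_paid = credence_rain * stake + credence_no_rain * stake  # 1.10

# Exactly one of the two events occurs, so the agent collects `stake` either way.
for outcome in ("rain", "no rain"):
    net = stake - total_paid
    print(f"{outcome}: net = {net:+.2f}")  # -0.10 in both cases: a guaranteed loss
```

The point of the thought experiment is that the guaranteed loss diagnoses the incoherence; the incoherence is the problem whether or not anyone actually shows up to collect.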
“Believing true things” is problematic for several reasons. First, it is apparently focused entirely on epistemic rationality, excluding instrumental rationality. Second, although there are many practical cases where it isn’t a problem, there is a question of what “true” means, especially for high-level beliefs about things like tables and chairs, which are more like conceptual clusters than objective realities. Third, even setting those aside, it is hard to see how we can get from “believing true things” to Bayes’ Law and other rules of probabilistic reasoning. I would argue that a solid connection between “believe true things” and the rationality constraints of classical logic can be made, but probabilistic reasoning requires an additional insight about what kind of thing can be a rationality constraint: you don’t just have beliefs, you have degrees of belief. We can say things about why degrees of belief might be better or worse, but doing so requires a notion of quality of belief which goes beyond truth alone; you are not maximizing the expected amount of truth or anything like that.
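One standard way to cash out a quality of belief that goes beyond truth alone is a proper scoring rule. A small sketch, with made-up credences: the log score grades a credence by how much probability it put on what actually happened, so a credence of 0.9 in a true proposition scores better than 0.6, even though neither credence is simply “true” or “false”.

```python
import math

def log_score(credence: float, outcome: bool) -> float:
    """Log score: log of the probability assigned to the actual outcome.
    Closer to 0 is better; this is a strictly proper scoring rule."""
    p = credence if outcome else 1.0 - credence
    return math.log(p)

# Suppose the proposition turns out to be true; compare several credences in it.
for credence in (0.6, 0.9, 0.99):
    print(f"credence {credence:.2f} -> log score {log_score(credence, True):+.3f}")
# Approximately -0.511, -0.105, -0.010: higher credence in the truth scores better,
# a graded notion of belief quality that plain truth/falsity cannot express.
```

This is only one way of grading degrees of belief, but it illustrates the kind of constraint that goes beyond “believe true things”.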
Another possible answer, which you didn’t name but could have, would be “rationality is about winning”. Something important is meant by this, but the idea is still vague: it helps point toward things that do look like potential rationality constraints and away from things which can’t serve as rationality constraints, but it is not the end of the story of what we might mean by calling something a constraint of rationality.
My intuition says “yes” in large part due to the word “humans”. I’m not certain whether two perfect Bayesians should disagree, for some unrealistic sense of “perfect”, but even if they shouldn’t, it is not clear that the same conclusion would apply to more limited agents.
Most of my probability mass is on you being right here, but I find RH’s arguments to the contrary intriguing. It’s not so much that I’m engaging with them in the expectation that I’ll change my mind about whether honest truth-seeking humans can knowingly disagree. (Actually, I think I should have said “can” all along rather than “should”, now that I think about it more!) I do, however, expect something about the structure of those disagreements can be understood more thoroughly. If ideal Bayesians always agree, that could mean understanding the ways that Bayesian assumptions break down for humans. If ideal Bayesians need not agree, it might mean understanding that better.
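For what it’s worth, here is a toy sketch of the ideal-Bayesian side of this (the prior, signal structure, and numbers are my own illustration, not anything from RH): two agents share a common prior over a binary state and each observes one independently noisy private signal. In this simple setup, announcing a posterior reveals the underlying signal, so after one exchange both agents can condition on both signals and end up with the same credence.

```python
# Toy Aumann-style agreement sketch (illustrative setup and numbers).
# Common prior: P(state = 1) = 0.5. Each signal matches the state with probability 0.8.
PRIOR = 0.5
ACC = 0.8

def posterior(signals):
    """P(state = 1 | signals), by Bayes' rule from the common prior."""
    like_1 = PRIOR
    like_0 = 1.0 - PRIOR
    for s in signals:
        like_1 *= ACC if s == 1 else 1.0 - ACC
        like_0 *= ACC if s == 0 else 1.0 - ACC
    return like_1 / (like_1 + like_0)

p_a = posterior([1])          # 0.8: agent A's posterior from its private signal
p_b = posterior([0])          # 0.2: agent B's posterior from its private signal
p_shared = posterior([1, 0])  # 0.5: what both compute once the posteriors are exchanged
print(p_a, p_b, p_shared)
```

The interesting question, as you say, is which of these assumptions (common prior, common knowledge of the announced posteriors, unbounded updating) break down for humans, and how.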
I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don’t understand how it would.
I think I can understand this one, to some extent. Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH’s paper explains it)… I would expect more insights into at least one of the following:
agents reasoning about their own prior (what is the structure of the reasoning? to what extent can an agent approve of, or not approve of, its own prior? are there things which can make an agent decide its own prior is bad? what must an agent believe about the process which created its prior? what should an agent do if it discovers that the process which created its prior was biased, or systematically not truth-seeking, or otherwise ‘problematic’?)
common knowledge of beliefs (is it realistic for beliefs to be common knowledge? when? are there more things to say about the structure of common knowledge, which help reconcile the usual assumption that an agent knows its own prior with the paradoxes of self-reference which prevent agents from knowing themselves so well?)
what it means for an agent to have a prior (how do we designate a special belief-state to call the prior, for realistic agents? can we do so at all in the face of logical uncertainty? is it better to just think in terms of a sequence of belief states, with some being relatively prior to others? can we make good models of agents who are becoming rational as they are learning, such that they lack an initial perfectly rational prior?)
reaching agreement with other agents (by an Aumann-like process or otherwise; by bringing in origin disputes or otherwise)
reasoning about one’s own origins (especially in the sense of justification structures; endorsing or not endorsing the way one’s beliefs were constructed or the way those beliefs became what they are more generally).
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational.
They are not the same, but that’s ok. You asked about constraints on, not definitions of, rationality. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets, nor into having even slightly more accurate beliefs about anything, then I can confidently say that I’m not interested.
(Of course, this is not to say that an idea with no such applications has literally zero value.)
Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH’s paper explains it)… I would expect more insights into at least one of the following: <...>
I completely agree that if RH were right, and if you understood him well, then you would receive multiple benefits, most of which could translate into winning hypothetical bets, and into having more accurate beliefs about many things. But that’s just the usual effect of learning, and not because you would satisfy the pre-rationality condition.
I continue to not understand in what precise way the agent that satisfies the pre-rationality condition is (claimed to be) superior to the agent that doesn’t. To be fair, this could be a hard question, and even if we don’t immediately see the benefit, that doesn’t mean that there is no benefit. But still, I’m quite suspicious. In my view this is the single most important question, and it’s weird to me that I don’t see it explicitly addressed.