Interesting take on things. I think I’d want a more specific definition of “rationality” to really debate, but I’ll make a few counter-arguments that the study and practice of rationality can improve one’s capability to choose and achieve desirable outcomes.
“Doing good things and avoiding mistakes” doesn’t really match my model, but let’s run with it. I’ll even grant that achieving this by luck (including the luck of having the right personality traits and being inexplicably drawn to the good things) is probably just as good as doing it by choice. I do _NOT_ grant that it happens by luck with the same probability as by choice (or by choice + luck). Some effort spent determining which things are good, and which things lead to more opportunity for good, is going to be rewarded (statistically) with better outcomes.
The question you don’t ask, but should, is “what does rationality cost, and in what cases is the cost higher than the benefit?” I’ll grant that this set of cases may be non-empty.
I’ll also wave at the recursion problem: “when is rationality useful” is a fundamentally rationalist question.
> Some effort spent determining which things are good, and which things lead to more opportunity for good, is going to be rewarded (statistically) with better outcomes.
All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician? My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they’d do better.
(Here I’m ignoring the possibility that learning rationality makes them decide to leave the field).
I’ll also wave at your wave at the recursion problem: “when is rationality useful” is a fundamentally rationalist question both in the sense of being philosophical, and in the sense that answering it is probably not very useful for actually improving your work in most fields.
> My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they’d do better.
I would take a bet against that, and I think that studying top mathematicians roughly confirms my position. My model is that many of the top mathematicians have very explicitly invested significant resources into metacognitive skills, and have reflected a lot on the epistemology and methodology behind mathematical proofs.
The problem for resolving the bet would likely be what we define as “rationality” here, but I would say that someone who has written or thought explicitly for a significant fraction of their time about questions like “what kind of evidence is compelling to me?”, “which of my cognitive strategies reliably cause me to make mistakes?”, and “what concrete drills and practice exercises can I design to get better at deriving true conclusions from true premises?” would count as “studying rationality”.
> All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician?
This post by Jacob Steinhardt seems relevant: it presents a sequence of models of research and describes what good research strategies look like under each. Of the final model, he says:
> Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren’t already employing a similar process.