Some effort spent determining which things are good, and which things lead to more opportunities for good, will (statistically) be rewarded with better outcomes.
All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician? My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they’d do better.
(Here I’m ignoring the possibility that learning rationality makes them decide to leave the field).
I’ll also wave at your wave at the recursion problem: “when is rationality useful” is a fundamentally rationalist question both in the sense of being philosophical, and in the sense that answering it is probably not very useful for actually improving your work in most fields.
My guess is that if they spent the (fairly significant) time taken to learn and do rationalist things on just learning more maths, they’d do better.
I would take a bet against that, and I think studying top mathematicians roughly confirms my side. My model is that many of the top mathematicians have very explicitly invested significant resources into metacognitive skills, and have reflected a lot on the epistemology and methodology behind mathematical proofs.
The problem in resolving the bet would likely be how we define “rationality” here, but I would say that someone who has written or thought explicitly, for a significant fraction of their time, about questions like “what kind of evidence is compelling to me?”, “which of my cognitive strategies tend to reliably cause me to make mistakes?”, and “what concrete drills and practice exercises can I design to get better at deriving true conclusions from true premises?” would count as “studying rationality”.
All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician?
This post by Jacob Steinhardt seems relevant: it presents a sequence of models of research and describes what good research strategies look like in each. He says, of the final model:
Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren’t already employing a similar process.