Despite all the years we’ve talked about practical rationality on this site and others, it still seems like we’re a community that loves chess, talks about chess constantly, reads dense theoretical books about chess (which usually contain few tactical examples), and then individually we each play roughly one chess game per month and don’t pay particular attention to how we performed in the game. This is how you end up thinking you know a lot about chess without actually improving at chess, and in fact losing to street hustlers who play chess all day but never read about its theoretical aspects.
In fact, we don’t even have the most basic framework for determining who is doing better or worse at rationality, beyond “do they seem happy? does it seem like they’re achieving their values?” There can be no gaming, ranking, or hierarchy without such a framework. You can’t be a black belt in rationality unless you have some way of showing that you’re better at rationality than a white belt, and for the same reason, white belts have no reason to listen to black belts.
I have tried countless times to make “games” or “tools” to structure various aspects of my own decision-making, to bootstrap my rationality, or to outright train my abilities. These projects usually either fail or end up so tailor-made for the problem I’m facing that I never touch them again.
Some of the tools that I’m aware of, which have proven consistently useful in one sense or another, include the following:
Anki
PredictionBook
Beeminder
Google Sheets, for tracking various things that don’t fit into Beeminder
Also, practical solutions that implement something like GTD are useful:
Nozbe or other GTD apps
Evernote for quick capture and sorting of information and notes
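One concrete feedback loop tools like PredictionBook (or a Google Sheet of logged predictions) make possible is calibration scoring. As a minimal sketch — assuming you’ve recorded each prediction as a stated probability plus whether it came true — here is a Brier score, a standard measure of forecast accuracy:

```python
def brier_score(predictions):
    """predictions: list of (probability, outcome) pairs, outcome 0 or 1.

    Mean squared error between stated probabilities and actual outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25.
    """
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical log: three predictions and whether each came true.
history = [(0.9, 1), (0.7, 0), (0.6, 1)]
print(brier_score(history))  # → 0.22
```

The point is not this particular formula but that it gives a number that moves when your judgment improves — exactly the kind of scoreboard the chess analogy above says we lack.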
One hypothesis for the feeling around rationality improvement is that rationality does not have well-defined core frameworks to serve as guideposts. Improvements therefore feel like grab bags of domain-specific tricks rather than something with a clearly defined corpus, feedback loops and measures of progress, and most importantly a sense that everyone who works on the thing is pushing meaningfully in the same direction. The Rationality Quotient begins to take steps in this direction, and obviously CFAR wants to be a testbed to try to develop such a thing if possible.
I think this is a result of focusing on the wrong level of abstraction. Specifically, material in this domain winds up looking like “share checklists of best practices.” Which is great, but not the thing. The thing is more like: figure out which knobs exist, which can be turned, and what to turn them to in order to become a person who can generate checklists of best practices on the fly.
The turn towards things like Focusing and TAPs has been a huge step in the right direction, AFAICT. The thing that is missing is what I will label a sense of collaboration. It could be that much of the material to be explored is better explored in high-bandwidth interactions rather than text, and that is causing some of the problem.
Yes, using best practices is (in some situations) a rational decision, but it is not rationality.
It is also rational to have some division of labor — people who produce the checklists, and people who use them — because developing good checklists requires time and resources.
Rationality itself is more like the art of creating such checklists, or evaluating the existing ones.
The way to winning includes both using and creating checklists. (I guess the optimal ratio depends on the quality of the existing checklists, on resources we can spend on our own research, on how the environment allows us to acquire more resources if we increase our skills, etc.)