One hypothesis for the feeling around rationality improvement is that rationality does not have well-defined core frameworks to serve as guideposts. Improvements therefore feel like grab bags of domain-specific tricks rather than something with a clearly defined corpus, feedback loops/measures of progress, and, most importantly, a sense that everyone who works on the thing is pushing meaningfully in the same direction. The Rationality Quotient begins to take steps in this direction, and obviously CFAR wants to be a testbed to try to develop such a thing if possible.
I think this is a result of focusing on the wrong level of abstraction. Specifically, material in this domain winds up looking like ‘share things that look like checklists of best practices.’ Which is great, but not the thing. The thing is more like figuring out which knobs exist, which can be turned, and what to turn them to in order to become a person who can generate checklists of best practices on the fly.
The turn towards things like Focusing and TAPs has been a huge step in the correct direction AFAICT. The thing that is missing is what I will label a sense of collaboration. It could be that much of the material to be explored is better explored in high-bandwidth interactions rather than text, and that is causing some of the problem.
Yes, using best practices is (in some situations) a rational decision, but it is not rationality.
It is also rational to have some division of labor: people who produce the checklists, and people who use them, because developing good checklists requires time and resources.
Rationality itself is more like the art of creating such checklists, or evaluating the existing ones.
The way to winning includes both using and creating checklists. (I guess the optimal ratio depends on the quality of the existing checklists, on the resources we can spend on our own research, on how the environment allows us to acquire more resources if we increase our skills, etc.)