When we become more rational, it’s usually because we invent a new cognitive rule that:
Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
Leads to winning in some, if not all, heretofore unforeseen situations that also share this property.
When you learn the general rule of not-arguing-over-definitions, then, in hindsight, you understand in a very general sense why humans on a desert island will draw lines in the sand to communicate if necessary instead of, with futility, mutually drawing lines that are naively intended to communicate the fact that they are dissatisfied with their respective companions’ line-drawing methods. You will foresee future instances of the general failure mode as well.
When we become more rational, it’s usually because we invent a new cognitive rule that:
Explains why certain beliefs and actions lead to winning in a set of previously observed situations that all share some property; and,
Leads to winning in some, if not all, unforeseen situations that also share this property.
When you learn the rule of not-arguing-over-definitions, you understand, in hindsight, why humans on a desert island will draw lines in the sand to communicate if necessary, instead of futilely drawing lines intended to communicate that they are dissatisfied with their respective companions’ line-drawing methods. You will expect future instances of the failure mode as well.
You might say that one possible problem statement of solving human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.
Because there is some algorithm. Your new cognitive rules are output, and the question is what algorithm generates them. If you explicitly understood that algorithm, then all other insights about human rationality would simply fall out of it as consequences.
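To make the shape of this claim concrete (purely as an illustrative toy — the actual algorithm implicit in our brains is unknown, and every name and property here is invented for the example): a rule-generator takes observed situations tagged with whether they led to winning, abstracts a property shared by the winning ones, and outputs a rule that predicts winning in unseen situations sharing that property.

```python
# Illustrative toy only: this sketches the *form* of "cognitive rules as
# outputs of some generator algorithm," not any claim about how brains work.
# All situation properties below are invented for the example.

def induce_rule(observed):
    """Given (situation_properties, won) pairs, propose a rule:
    predict a win whenever every property shared by all winning
    observations is present."""
    winning = [props for props, won in observed if won]
    shared = set.intersection(*map(set, winning))
    # The generated "rule" is itself a function of situations.
    return lambda props: shared <= set(props)

observed = [
    ({"taboo_words", "concrete_example"}, True),
    ({"concrete_example", "short"}, True),
    ({"argue_definitions"}, False),
]
rule = induce_rule(observed)
print(rule({"concrete_example", "novel_context"}))  # generalizes to a new case
print(rule({"argue_definitions"}))                  # rejects the failure mode
```

The point of the toy is only that the rule is an output: once you had an explicit generator like `induce_rule` (for the real thing, not this caricature), particular rules like not-arguing-over-definitions would fall out of it as consequences.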
You might say that one problem of solving human rationality is obtaining an understanding of the algorithm in the physical structure of our brains that allows us to generate new rules.
There is some algorithm. Your new cognitive rules are output, and the question is: “What algorithm generates them?” If you explicitly understood that algorithm, then all other insights about human rationality would simply fall out of it as consequences.
Trying to modify this to make it more readable. Small changes, but your original is quite hard to parse.
Meta: not sure why I find this hard to read when your past post(s) read fine. If something changed in this iteration, I would encourage you to go back to the more readable method. Keep up the writing!
You might say that one problem of solving human rationality is obtaining an understanding of the algorithm in the physical structure of our brains that allows us to generate new rules.
Well, that’s terrifying. I didn’t mean that that’s a subproblem; I meant that it’s one possible way of stating most, if not all, of the problem. Thank you for speaking up.