I agree that these virtue ethics may help some people with their instrumental rationality. In general, I have noticed a trend on LessWrong in which popular modes of thinking are first shunned as irrational and not based on truth, only to be readopted later as more functional for achieving one’s stated goals. I think this process is important, because it allows one to rationally evaluate which ‘irrational’ models lead to the best outcome.
This also fits my (non-LW) experience very well.
There’s that catchy saying: “evolution is smarter than you are”. I think it probably extends somewhat to cultural evolution as well. Given that our behaviour is strongly influenced by both, I think we should expect to ‘rediscover’ many of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.
Sadly, there’s good reason to think that many of these familiar heuristics and biases were well-suited to acting in tribes on the savanna during a particular period of time, and it’s likely that they’ll lead us into more trouble the further we get from that environment.
You are right. I was wrong, or at least far too sloppy. I agree that we should not presume that any given mismatch between our rational evaluation and a more ‘folksy’ one can be attributed to a problem in our map. Rationality is interesting precisely because it does better than my intuition in situations that my ancestors didn’t often encounter.
But the point I’m trying, and so far failing, to get at is that for the purposes of instrumental rationality, we are equipped with some interesting information-processing gear. Certainly, letting it run amok won’t benefit me, but rationally exploiting my intuitions where appropriate is kind of a cool mind-hack. Will_Newsome’s post, as I understood it, does a good job of making this point. He says that “moral philosophy was designed for humans, not for rational agents,” and that we should exploit that where appropriate.
The post resonated with how I try to do science, for example. I adopt a very naive form of scientific realism when I’m learning new scientific theories: I take the observations and proposed explanatory models to be objective truths, picturing them in my mind’s eye. There’s something about that which is just psychologically easier. The skepticism and clearer epistemological thinking can be switched on later, once I’ve got my head wrapped around the idea.
As one of the rationalist quote threads said,
Which one? I can’t find it now.
Hm, you know what? I think I might’ve gotten that Novalis quote just from browsing Wikiquote. Although it certainly does seem like something I would’ve picked up from the quote threads.