Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)
After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I’ve come to the opinion that the “disagreement on priorities”, as I originally called it, is more significant than I first acknowledged.
To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn’t work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to exploring different parts of the solution space and potentially finding different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.
… the genesis of the meta-rationalist epistemology is that the map is part of the territory, and the map is thus constrained by the territory rather than by an external desire for correspondence or anything else.
Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.
So my current takeaways are these. As rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating. If we can figure out ways to formulate meta-rationalist ideas less inscrutably and with less nebulosity, we should do so—it will benefit everyone. And we should look into what meta-rationalists have to say about creativity / hypothesis generation—perhaps it will help with formulating a general high-level theory of creative thinking (and if we do that in a way that’s precise enough to be programmed into computers, it would be pretty significant).