Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.
This seems very clearly true, which makes it strange to use “evolution produces individuals who, in some societies, don’t seem to value separate epistemic concerns” as an argument.
Well-functioning societies have separation of concerns, such as into different professions (as Plato described in his Republic). Law necessarily involves separation of concerns, as well (the court is determining whether someone broke a law, not just directly what consequences they will suffer). Such systems are created sometimes, and they often degrade over time (often through being exploited), but can sometimes be repaired.
If you notice that most people’s strategies implicitly don’t value epistemic rationality as a separate concern, you can infer that there isn’t currently a functioning concern-separating social structure that people buy into, and in particular that they don’t believe that desirable rule of law has been achieved.
In an environment with very low epistemic-instrumental separation of concerns, an especially intelligent individual actually has an incentive to insulate their own epistemics (become good at lying and acting), so that they can think. Optimizing for true beliefs then becomes highly instrumentally valuable (modulo the cost of keeping up the barrier / the downside risk if discovered).
Also, still thinking about the environment where the overarching social structure doesn’t support much separation of concerns, there’s still value to be had in associating with individuals who often naively speak the truth (not the liar/actor type), because there are a lot of misaligned preferences floating around. Your incentives for bending the truth are different from another person’s, so you prefer to associate with others who don’t bend the truth much (especially if you internally separate concerns). So, separation of concerns seems like a convergent instrumental goal which will be bubbling under the surface even in a dysfunctional superstructure.
Of course, both effects are limited in their ability to encourage truth, and it’s a little like arguing that cooperation in the prisoner’s dilemma is a convergent instrumental subgoal bubbling under the surface of a defection equilibrium.
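The analogy can be made concrete with the standard one-shot prisoner’s dilemma payoffs (the numbers below are the conventional illustrative values, not anything from the discussion above): defection is a dominant strategy, so the equilibrium is mutual defection even though mutual cooperation would leave both players better off. The “value” of cooperation stays latent, just as the instrumental value of separated concerns can stay latent in a dysfunctional superstructure.

```python
# Row player's payoffs in a standard one-shot prisoner's dilemma,
# with the usual ordering T > R > P > S (here T=5, R=3, P=1, S=0).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (reward, R)
    ("C", "D"): 0,  # cooperate against a defector (sucker's payoff, S)
    ("D", "C"): 5,  # defect against a cooperator (temptation, T)
    ("D", "D"): 1,  # mutual defection (punishment, P)
}

def best_response(opponent_move):
    """Return the move maximizing the row player's payoff
    against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is dominant: it is the best response whatever the opponent does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...even though mutual cooperation beats the mutual-defection equilibrium:
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

The point of the sketch is that a payoff gap between the equilibrium and the cooperative outcome exists, but without a mechanism (repetition, enforcement, reputation) it doesn’t by itself move play away from defection.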