Separation of concerns is a principle in computer science which says that distinct concerns should be addressed by distinct subsystems, so that you can optimize for them separately.
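To make the principle concrete, here's a minimal sketch (hypothetical names and numbers, not anything from the discussion) of the separation in question: one module optimized purely for predictive accuracy, another purely for choosing actions given whatever the first reports. Neither needs to know anything about the other's internals:

```python
# A toy illustration of separation of concerns (hypothetical design):
# the epistemic module only tracks accuracy; the instrumental module
# only chooses actions given a reported belief.

class BeliefModule:
    """Concern 1: estimate how likely an event is; knows nothing about goals."""

    def __init__(self):
        self.estimate = 0.5  # start from ignorance

    def update(self, observed: bool, rate: float = 0.1):
        # Exponential moving average toward what was actually observed.
        self.estimate += rate * ((1.0 if observed else 0.0) - self.estimate)


class DecisionModule:
    """Concern 2: pick the action with higher expected payoff; knows
    nothing about how the probability estimate was produced."""

    def choose(self, p: float, payoff_act: float, payoff_wait: float) -> str:
        return "act" if p * payoff_act > payoff_wait else "wait"


beliefs = BeliefModule()
for observation in [True, True, False, True]:
    beliefs.update(observation)

print(DecisionModule().choose(beliefs.estimate, payoff_act=10.0, payoff_wait=5.0))
# -> "act"
```

The payoff of the separation is that you can swap in a better belief-updater without touching the decision rule, and vice versa.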
Since separation of concerns is obviously only applicable to bounded agents, it seems like someone who is clearly optimizing for epistemic rationality as a separate concern is vulnerable to being perceived as lacking the ability to optimize for instrumental rationality in an end-to-end way, and perhaps vulnerable to marketing that promises to teach exactly that end-to-end optimization. Also, most people are already running evolved strategies that don’t (implicitly) value epistemic rationality very highly as a separate concern, so it seems like you need to say something about why they should give up such strategies (which evolution has presumably poured huge amounts of compute into optimizing), not just point to the principle in computer science (where humans are forced into separation of concerns by a lack of sufficient computing power).
It seems like there are two potential arguments one could make in this regard:
1. Our environment has changed and the strategies that evolution came up with aren’t very good anymore.
2. Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.
“Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.”
This seems very clearly true, such that it seems strange to use “evolution produces individuals who, in some societies, don’t seem to value separate epistemic concerns” as an argument.
Well-functioning societies have separation of concerns, such as into different professions (as Plato described in his Republic). Law necessarily involves separation of concerns as well: the court determines whether someone broke a law, not directly what consequences they will suffer. Such systems are sometimes created, often degrade over time (frequently through being exploited), and can sometimes be repaired.
If you notice that most people’s strategies implicitly don’t value epistemic rationality as a separate concern, you can infer that there isn’t currently a functioning concern-separating social structure that people buy into, and in particular that they don’t believe that desirable rule of law has been achieved.
In an environment with very low epistemic-instrumental separation of concerns, an especially intelligent individual actually has an incentive to insulate their own epistemics (by becoming good at lying and acting) so that they can think. Optimizing for true beliefs then becomes highly instrumentally valuable (modulo the cost of keeping up the barrier, and the downside risk if discovered).
Also, still thinking about the environment where the overarching social structure doesn’t support much separation of concerns: there’s still value to be had in associating with individuals who often naively speak the truth (not the liar/actor type), because there are a lot of misaligned preferences floating around. Your incentives for bending the truth are different from another person’s, so you prefer to associate with others who don’t bend the truth much (especially if you internally separate concerns). So, separation of concerns seems like a convergent instrumental goal that will be bubbling under the surface even in a dysfunctional superstructure.
Of course, both effects are limited in their ability to encourage truth, and it’s a little like arguing that cooperation in the prisoner’s dilemma is a convergent instrumental subgoal bubbling under the surface of a mutual-defection equilibrium.
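To spell out that analogy: with standard textbook payoffs (assumed here for illustration, not taken from anything above), mutual cooperation pays both players more than mutual defection, yet mutual defection is the only outcome where each move is a best response to the other. A quick sketch verifying this:

```python
# Standard one-shot prisoner's dilemma payoffs (textbook numbers, assumed
# here): (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: better for both than (D, D)...
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # ...but mutual defection is the only equilibrium
}

def best_response(their_move: str) -> str:
    # The move that maximizes my payoff, holding the other player's move fixed.
    return max(("C", "D"), key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# An outcome is a Nash equilibrium iff each move is a best response to the other.
for mine, theirs in PAYOFFS:
    if best_response(theirs) == mine and best_response(mine) == theirs:
        print(f"equilibrium: ({mine}, {theirs})")  # prints only ('D', 'D')
```

Both players would prefer (C, C), which is why cooperation keeps “bubbling under the surface”; but neither can reach it unilaterally, which is why both effects above are limited.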
“Since separation of concerns is obviously only applicable to bounded agents, it seems like someone who is clearly optimizing for epistemic rationality as a separate concern is vulnerable to being perceived as lacking the ability to optimize for instrumental rationality in an end-to-end way …”
No one believes anyone else to be an unbounded agent, so how is the concern with being perceived as a bounded agent relevant? A bounded agent can achieve better epistemics and more effective instrumental action by separating concerns, and there isn’t a reachable upper bound.
I think a lot of words have been spent on this debate elsewhere, but all I feel like citing is “biases against overcoming bias”. The point it mentions about costs accruing mostly to you is related to the point you made about group rationality. The point about not knowing how to evaluate whether epistemic rationality is useful without first developing epistemic rationality is perhaps intended as little more than a cute retort, but I take it fairly seriously; it seems to apply to specific examples I encounter.
My recent view on this is mostly: but if you actually look, doesn’t it seem really useful to be able to separate these concerns? Overwhelmingly so?
Both you and Jessica seem to have interpreted me as arguing against separation of concerns w.r.t. epistemic and instrumental rationality, which wasn’t really my intention. I’m actually highly in favor of separation of concerns in this regard, and was just reporting a train of thought that was triggered by your “Separation of concerns is a principle in computer science” statement.
I didn’t follow much of the earlier debates about instrumental vs epistemic rationality (in part because I think something like curiosity/truth/knowledge is part of my terminal values so I’d personally want epistemic rationality regardless) so apologies if I’m retreading familiar ground.
Yeah, I later realized that my comment was not really addressing what you were interested in.
I read you as questioning the argument “separation of concerns, therefore, separation of epistemic vs instrumental”—not questioning the conclusion, which is what I initially responded to.
I think separation-of-concerns just shouldn’t be viewed as an argument in itself (i.e., identifying some concerns that you can make a distinction between does not mean you should separate them). That conclusion rests on many other considerations.
Part of my thinking in writing the post was that humans have a relatively high degree of separation between epistemic and instrumental even without special scientific/rationalist memes. So, you can observe the phenomenon, take it as an example of separation-of-concerns, and think about why that may happen without thinking about abandoning evolved strategies.
Sort of like the question “why would an evolved species invent mathematics?”—why would an evolved species have a concept of truth? (But, I’m somewhat conflating ‘having a concept of truth’ and ‘having beliefs at all, which an outside observer might meaningfully apply a concept of truth to’.)