Separation of Concerns

Separation of concerns is a principle in computer science which says that distinct concerns should be addressed by distinct subsystems, so that you can optimize for them separately. The idea also applies in many other places, including human rationality. It has been written about before; I’m not trying to make a comprehensive post about it, just to remark on some things I recently thought about.
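
To make the software version concrete, here is a toy sketch (the example and its function names are my own illustration, in Python): one function tangles parsing, arithmetic, and presentation together, while the separated version handles each concern in its own small piece that can be tested, replaced, or optimized independently.

```python
# Toy illustration of separation of concerns; the example is hypothetical.

# One function mixing three concerns: parsing, computation, and presentation.
def report_mixed(raw):
    values = [float(x) for x in raw if x.strip()]
    avg = sum(values) / len(values)
    return f"average: {avg:.2f}"

# The same work with the concerns separated out.
def parse(raw):
    # Parsing concern: turn raw strings into numbers, skipping blanks.
    return [float(x) for x in raw if x.strip()]

def average(values):
    # Computation concern: pure arithmetic, easy to test in isolation.
    return sum(values) / len(values)

def render(avg):
    # Presentation concern: formatting can change without touching the math.
    return f"average: {avg:.2f}"

data = ["1.0", "2.0", " ", "3.0"]
assert report_mixed(data) == render(average(parse(data)))  # both yield "average: 2.00"
```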

Epistemic vs Instrumental

The most obvious example is beliefs vs desires. Although the distinction may not be a perfect separation-of-concerns in practice (or even in principle), at least I can say this:

  • Even non-rationalists find it useful to make a relatively firm distinction between what is true and what they want to be true;

  • Rationalists, scientists, and intellectuals of many varieties tend to value an especially sharp distinction of this kind.

I’m particularly thinking about how the distinction is used in conversation. If an especially sharp distinction isn’t being made, you might see things like:

  • Alice makes a factual statement, but the statement has (intended or unintended) conversational implicature which is perceived as negative by most of the people present. Alice is chastised and concedes the point, withdrawing her assertion.

  • Bob mentions a negative consequence of a proposed law. Everyone listening perceives Bob to be arguing against the law.

Notice that this isn’t an easy distinction to make. It isn’t right at all to just ignore conversational implicature. You should not only make literal statements, nor should you assume that everyone else is doing so. The skill is more like: raise the literal content of the words as a hypothesis; make a distinction in your mind between what is said and anything else which may have been meant.

Side note—as with many conversation norms, the distinctions I’m mentioning in this post cannot be imposed on a conversation unilaterally. Sometimes simply pointing out a distinction works; but generally, one has to meet a conversation where it’s at, and only gently try to pull it to a better place. If you’re in a discussion which is strongly failing to make a true-vs-useful distinction, simply pointing out examples of the problem will very likely be taken as an attack, making the problem worse.

Making a distinction between epistemics and instrumentality seems like a kind of “universal solvent” for cognitive separation of concerns—the rest of the examples I’m going to mention feel like consequences of this one, to some extent. I think part of the reason for this is that “truth” is a concept which has a lot of separation-of-concerns built in: it’s not just that you consider truth separately from usefulness; you also consider the truth of each individual statement separately, which creates a scaffolding supporting a huge variety of separations of concerns (you get one any time you’re able to make an explicit distinction between different assertions).

But the distinction is also very broad. Actually, it’s kind of a mess—it feels a bit like “truth vs everything else”. Earlier, I tried to characterize it as “what’s true vs what you want to be true”, but taken literally, this only captures a narrow case of what I’m pointing at. There are many different goals which statements can optimize besides truth.

  • You could want to believe something because you want it to be true—perhaps you can’t stand thinking about the possibility of it being false.

  • You could want to claim something because it helps argue for/against some side in a decision which you want to influence, or for/against some other belief which you want to hold for some other reason.

  • You could want to believe something because the behaviors encouraged by the belief are good—perhaps you exercise more if you believe it will make you lose weight; perhaps everyone believing in karma, or heaven and hell, makes for a stronger and more cooperative community.

Simply put, there are a wide variety of incentives on beliefs and claims. There wouldn’t even be a concept of ‘belief’ or ‘claim’ if we didn’t separate out the idea of truth from all the other reasons one might believe/claim something, and optimize for it separately. Yet, it is kind of fascinating that we do this even to the degree that we do—how do we successfully identify the ‘truth’ concern in the first place, and sort it out from all the other incentives on our beliefs?

Argument vs Premises and Conclusion

Another important distinction is to separate the evaluation of hypothetical if-then statements from any concern with the truth of their premises or conclusions. A common complaint of the more logic-minded about the less is that hardly anyone is capable of properly distinguishing the claim “If X, then Y” from the claim “X, and also Y”.
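
Reading the if-then as material implication (the standard logical rendering; the Python sketch below is my own toy illustration), the difference is visible directly in the truth tables: the two claims agree only when X is true, and “If X, then Y” also holds whenever X is false.

```python
# Toy illustration, not from the post: "If X, then Y" as material implication
# vs. "X, and also Y" as conjunction, over all four truth assignments.
from itertools import product

for x, y in product([True, False], repeat=2):
    if_then = (not x) or y  # false only when X is true and Y is false
    x_and_y = x and y       # true only when both hold
    print(f"X={x!s:5} Y={y!s:5}  if-then={if_then!s:5}  and={x_and_y!s:5}")
```

The rows where X is false are exactly the ones people tend to mishandle: “If X, then Y” is true there, while “X, and also Y” is false.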

It could be that a lack of a very sharp truth-vs-implicature distinction is what blocks people from making an if-vs-and distinction. Why would you be claiming “If X, then Y” if not to then say “by the way, X; so, Y”? (There are actually lots of reasons, but they’re all much less common than making an argument because you believe the premises and want to argue the conclusion—so, that’s the commonly understood implicature.)

However, it’s also possible to successfully make the “truth” distinction but not the “hypothetical” distinction. Hypothetical reasoning is a tricky skill. Even if you successfully make the distinction when it is pointed out explicitly, I’d guess that there are times when you fail to make it in conversation or private thought.

Preferences vs Bids

The main reason I’m writing this post is actually that this distinction hit me recently. You can say that you want something, or say how you feel about something, without it being a bid for someone to do something about it. This is both close to the overall topic of In My Culture and one of the specific examples listed in that post.

Actually, let’s split this up into cases:

Preferences about social norms vs bids for those social norms to be in place. This is more or less the point of the In My Culture article: saying “in my culture” before something puts a little distance between the conversation and the preferred norm, so that the norm is put on the table as an invitation rather than being perceived as a requirement.

Proposals vs preferences vs bids. Imagine a conversation about what restaurant to go to. Often, people run into a problem: no one has any preferences; everyone is fine with whatever; no one is willing to make any proposals. One reason why this might happen is that proposals, and preferences, are perceived as bids. No one wants to take the blame for a bad plan; no one wants to be seen as selfish or negligent of others’ preferences. So, there’s a natural inclination to lose touch with your preferences: you really feel like you don’t care, and like you can’t think of any options.

  • A proposal puts an option ‘on the table’ for consideration.

  • A preference is your own component of the group utility function. If you also think other people should have the same preference, you can state your reason for that, and let others update if they will.

  • A bid is a request for group action: you don’t just want tacos, you don’t even merely propose tacos; you call on the group to collectively get tacos.

If a strong distinction between preferences and bids is made, it gets easier to state what you prefer, trusting that the group will take it as only one data point of many to be taken together. If a distinction between proposals and bids is made, it will be easier to list whatever comes to mind, and to think of places you’d actually like to go.

Feelings vs bids. I think this one comes less naturally to people who make a strong truth distinction—there’s something about directing attention toward the literal truth of statements which directs attention away from how you feel about them, even though how you feel is something you can also try to have true beliefs about. So, in practice, people who make an especially strong truth distinction may nonetheless treat statements about feelings as if they were statements about the things the feelings are about, precisely because they’re hypersensitive to other people failing to make that distinction. So: know that you can say how you feel about something without it being anything more. Feeling angry about someone’s statement doesn’t have to be a bid for them to take it back, or a claim that it is false. Feeling sad doesn’t have to be a bid for attention. An emotion doesn’t even have to reflect your more considered preferences.

(To make this a reality, you probably have to explicitly flag that your emotions are not bids.)

When a group of people is skilled at making a truth distinction, certain kinds of conversation, and certain kinds of thinking, become much easier: all sorts of beliefs can be put out into the open where they otherwise couldn’t, allowing the collective knowledge to go much further. Similarly, when a group of people is skilled at the feelings distinction, I expect things can go places they otherwise couldn’t: you can mention in passing that something everyone else seems to like makes you sad, without it becoming a big deal; there is sufficient trust that you can say how you are feeling about things, in detail, without expecting it to make everything complicated.

The main reason I wrote this post is that someone was talking about this kind of interaction, and I initially didn’t see it as very possible or necessarily desirable. After thinking about it more, the analogy to making a strong truth distinction hit me. Someone stuck in a culture without a strong truth distinction might similarly see such a distinction as ‘not possible or desirable’: the usefulness of an assertion is obviously more important than its truth; in reality, being overly obsessed with truth will both make you vulnerable (if you say true things naively) and ignorant (if you take statements at face value too much, ignoring connotation and implicature); even if it were possible to set aside those issues, what’s the use of saying a bunch of true stuff? Does it get things done? Similarly: the truth of the matter is more important than how you feel about it; in reality, stating your true feelings all the time will make you vulnerable and perceived as needy or emotional; even if you could set those things aside, what’s the point of talking about feelings all the time?

Now it seems both are possible and simply good, for roughly the same reason. Having the ability to make distinctions doesn’t require you to explicitly point out those distinctions in every circumstance; rather, it opens up more possibilities.

I can’t say a whole lot about the benefits of a feelings-fluent culture, because I haven’t really experienced it. This kind of thing is part of what circling seems to be about, in my mind. I think the rationalist community as I’ve experienced it goes somewhat in that direction, but definitely not all the way.