I think I can, let me know if this explanation makes sense. (If not then this is probably also the reason I didn’t understand your clarification. Also this ended up pretty long so I probably underexpressed originally, sorry about that.)
What I mean here is that we shouldn’t try to separate openness from vulnerability because openness can’t exist without vulnerability. What do we hope to gain from openness? I think there are basically three answers. We might be embarrassed by our interests, but know that we could benefit from those interests being known to certain people. We might want an outside perspective on a personal matter, because often we’re too close to ourselves to evaluate our situation or our actions reasonably. We might want to make a costly social display, signalling our emotional investment in a particular relationship by demonstrating parts of ourselves that we wouldn’t demonstrate to somebody we weren’t invested in. In practice we’re usually doing a combination of all three of these when we make displays of vulnerability.
Which of these can be done without some form of personal risk? We can probably do the first one: in unusually sex-positive communities, for example, people can often disclose their fetishes relatively easily and without much fear of backlash. As a more mundane example, a person might not want to talk about anime with their coworkers, but be excited to talk about it at an anime convention.
The other two I think require risk to be worthwhile. When we seek the counsel of others, not merely their expertise, we are risking the comforting belief that we understand something or are justified in our actions. In an exchange like this:
> I might even respond to “I have something vulnerable to say” with “Oh ok, I’m happy to listen, but also I’d suggest that we could first meditate together for just a bit on what is literally vulnerable about it, circumspectly, and see if we can decrease that aspect”,
if I were the other party, I think this response would make it difficult to access openness in the ensuing conversation. When I say “I have something vulnerable to say”, I might mean a few different things, but they’re almost all of the flavor “I want your perspective on a topic where I have trouble trusting my own perspective. It would be temporarily painful for me if your perspective were to differ much from my own, but I find you some combination of safe enough to talk to and insightful enough to be worth talking to that I would like you to give me your true perspective. From this I hope (1) to achieve a better understanding of my own circumstances, even at the cost of being upset for a while, (2) to show you that you’re important to me in a way that I’m not incentivized to fake, and (3) to show you that I am the sort of person who cares about more than just my own perspective. If your perspective does differ significantly from mine, I hope you will be careful but honest when you explain that to me.”
Maybe there is a sort of person who can always get (1) without asking for (2) and (3); that is, can have at least some of the goods without any of the bads! This sort of person would need to be unusually resistant to, perhaps immune to, embarrassment or judgment. They would be able to communicate lots of relevant facts about themselves, even those which other people might hesitate to communicate, and would be perfectly willing to accept a different point of view without even a hint of regret or attachment to their old perspective. But this person wouldn’t be able to make costly signals to demonstrate genuine social connection. I find it hard to imagine what emotional closeness even could look like for a person like this, and I struggle to describe the life I imagine they would lead as one involving anything I recognize as “openness”.
(Moreover, I’m not convinced that this is a way someone can be while still having any sort of personal identity whatsoever. Admittedly I haven’t known many people who tried to be this way, but the couple of people I have known were extraordinarily emotionally dangerous: they did quite a bit of social manipulation and, when confronted, seemed unable to understand what social manipulation even is, and they ended up suffering mental breaks, although of course I can’t be completely sure of their mental states and can’t speak at all to causation.)
To sum up the most important points: I think deep social bonds (those built out of justified belief in mutual care) are inherently vulnerable. They don’t just coincide with vulnerability, they are made of it. My thoughts and my self-perceptions can cause me pain. If I try to ensure that another person cannot cause me pain, or can cause me as little pain as possible while still giving me whatever social goods I can get risklessly from them, then I’m almost by definition trying to keep them as separate from my thoughts and my self-image as I can, and this seems synonymous with trying not to care about them. There are some social goods that can be had risklessly in certain contexts, and it’s worthwhile to think about how often we want to be in those contexts and how much we value those goods, but the answers should probably be “occasionally” and “not much, relatively”. If we want to be more open and authentic around others and to get more of the social goods we derive from openness and authenticity, then focusing on the evasion of vulnerability is very nearly the worst possible approach.
Ok so for context, I definitely agree that in practice, in many cases it’s not feasible to completely derisk openness from vulnerability, and often this means that the risk is totally worth it.
That said, I’m saying that the vulnerability itself, the exposure to risk of harm, is bad. So for example this could recommend noticing the exposure and studying it and sometimes marginally decreasing it, even as you’re still taking it on, if you can’t get rid of it entirely without also jettisoning some other precious stuff.
we are risking the comforting belief that we understand something or are justified in our actions
In this example, I would in real life become genuinely curious as to why the belief is comforting, in what manner it is comforting, what would be potentially harmful about having the belief contradicted, and how to avoid that harm.
One example is cognitive miserliness: rethinking things takes energy, so by default we avoid it. This is not a mere flaw! It’s an important and useful cognitive instinct. If I’m going to override that instinct, I’d rather at least be aware that I’m doing that; and even better would be to check in with myself about whether I want to spend the energy to rethink things right now—sometimes I don’t! And sometimes I do, and after I genuinely check in with myself and get a “yes”, it feels less bad / harmful to get the belief update!
Another example is bucket errors. I don’t want to answer someone’s question if, by their psychology, I’m also accidentally answering “can I pursue my dreams of being a writer or not”! Unless of course tying my answer to the question to that other question actually makes sense. But often we tie things in a way that doesn’t make sense, or at least, in a way where you could do better if you thought about it more and debucketed somewhat.
(2) to show you that you’re important to me in a way that I’m not incentivized to fake, and (3) to show you that I am the sort of person who cares about more than just my own perspective.
In my idioculture, these descriptions are ambiguous between intentions I would consider good and intentions I would consider bad. Roughly, I’d say it’s very important that the action is good / makes sense / is healthy / is wholesome on the concrete object level, without the signaling stuff, in order to be a good signal. Otherwise what you’re actually signaling is stuff like “I’ll do needless damage to myself for the sake of this relationship”. I don’t mean to just generally derogate that stance, because it sounds like it might come from deep love / devotion / need / other important things, but I would say that it’s not in fact healthy, and is not good for you or for the other person (on the assumption that the other person is a good person; if they are not a good person, they might enjoy or even specifically target that sort of self-damage).
But this person wouldn’t be able to make costly signals to demonstrate genuine social connection.
I think this is totally deeply incorrect. You can simply invest your efforts to help the other person in a healthy way. Another way is trusting / relying on the other person, including in exposure to risk of harm, when that exposure is required by the task. For example, rock climbing with ropes where you rely on your belayer. Or starting a company, raising a child, etc. There’s a ton of challenges that are worthwhile, and which often demand that we rely on each other, including with respect to risk of harm.
So for example this could recommend noticing the exposure and studying it and sometimes marginally decreasing it, even as you’re still taking it on, if you can’t get rid of it entirely without also jettisoning some other precious stuff.
I agree with this, I think our disagreement is mainly about how much we expect to decrease this exposure before we start jettisoning the precious stuff.
In this example, I would in real life become genuinely curious as to why the belief is comforting, in what manner it is comforting, what would be potentially harmful about having the belief contradicted, and how to avoid that harm.
I agree that this is a reasonable way to avoid some unnecessary risk, but the examples you give seem odd to me. Maintaining beliefs is often comforting because of positionality. A good reasoner should be highly willing to change their mind on anything given the right circumstances, but a good reasoner who found themselves constantly changing their mind and rarely anticipating it would start worrying about hypotheses like “I am fundamentally detached from reality and unable to reliably distinguish truth from fiction” and become quite distraught. I think this is the typical way for beliefs to be comforting and this applies to basically all beliefs, so I don’t think we can expect to avoid at least some amount of harm in most instances of vulnerability. (Of course this consideration is pretty small for most questions of fact! If my friend is wrong about which toppings are available at some pizza place, I don’t expect they would suffer much positional pain from being corrected. But of course most interactions don’t involve meaningful vulnerability from either party, which is why the small vulnerability that does exist is usually not acknowledged.)
In my idioculture, these descriptions are ambiguous between intentions I would consider good and intentions I would consider bad. Roughly, I’d say it’s very important that the action is good / makes sense / is healthy / is wholesome on the concrete object level, without the signaling stuff, in order to be a good signal.
I agree, this is why part (1) was important! Vulnerability can be used incorrectly, I’m not saying that we should pay no attention to the fact that openness induces risk. Indeed, it’s not that hard to describe types of people who consistently misuse vulnerability and cause harm. People can overshare, inappropriately disclosing information about themselves in the hope that their demonstration of vulnerability will produce a social connection, while not valuing the perspective of their counterparty enough to justify the exposure. People can also deceive (or self-deceive!), incorrectly signalling the pain they expect to experience if their perspective is challenged, either to provoke sympathy or demonstrate emotional strength. This just means that we should not be open with everyone, which is why the signal works at all.
I think this is totally deeply incorrect. You can simply invest your efforts to help the other person in a healthy way. Another way is trusting / relying on the other person, including in exposure to risk of harm, when that exposure is required by the task. For example, rock climbing with ropes where you rely on your belayer. Or starting a company, raising a child, etc.
Relying on each other is not the sort of social bond I have in mind here. Rock climbing or starting a business are excellent demonstrations of coincidence of interests or goals, and this produces some sort of social bond, but it’s not the same sort of social bond that vulnerability produces and is not a sufficient replacement, at least in my experience. Helping others and being helped in return, similarly, produces a social bond, but does not replace the need for vulnerability. These can be entryways, and indeed, most close friendships and relationships that I’m aware of began with a coincidence of interests and progressed to joint projects and mutual favors before expressions of vulnerability. But I’m not aware of any close friendships or healthy relationships (in my estimation of what “close” and “healthy” mean) that did not, at some point, involve unguarding, and as far as I can tell, this is where closeness actually begins. Raising a child together can probably produce this type of social bond, but if two (or more I guess) people consistently assess that they’re more scared of the other’s judgment than they are interested in the other’s potentially judgmental opinion on topics they care about, or if they’re only able to solicit the other’s opinion because the other person’s evaluation of them doesn’t feed into their sense of self enough that it could sting, I really really really don’t think those people should raise a child together.
(I guess I should mention the following: of course any starting point can work for any task. If you have enough foresight and are sufficiently good at weighing costs and benefits, you can start by trying to assess the appropriate amount of emotional risk and end up with a perfect policy. However, in this instance, under-risking is much worse than over-risking because it is self-insulating. In my experience, people who are too eager to demonstrate emotional vulnerability get lots of social feedback and settle to a more sustainable and healthy pace pretty quickly. Meanwhile those who are too timid can spend years and decades failing to find friendships that sustain them, and because they less often engage in the vulnerable practice of soliciting outside views from a person they care enough about to take seriously on matters of the self, they often don’t know that things can be different. We agree in principle that some amount of risk is justified but not all risk is justified, and because the evaluation of these quantities varies so much from situation to situation, I doubt we’ll be able to sketch out an example of explicit disagreement in enough detail to be fully sure that we disagree about the object-level best policy for the people in the example. The main reason I’m objecting this strongly is that I expect that, to a person who already under-risks, the framing and examples you provide will systematically recommend under-risking. An over-risker might apply the same framing and not end up with the same bias, but I think we should worry much less about how over-riskers will receive our advice on this topic, because over-riskers for the most part don’t need advice.)
Maintaining beliefs is often comforting because of positionality. A good reasoner should be highly willing to change their mind on anything given the right circumstances, but a good reasoner who found themselves constantly changing their mind and rarely anticipating it would start worrying about hypotheses like “I am fundamentally detached from reality and unable to reliably distinguish truth from fiction” and become quite distraught. I think this is the typical way for beliefs to be comforting and this applies to basically all beliefs, so I don’t think we can expect to avoid at least some amount of harm in most instances of vulnerability.
I’ve read this a couple times and am still not following, or maybe I disagree / don’t see where you’re coming from. Are you basically saying that for most people, for most (important?) beliefs, that person has some risk of being harmed by getting those beliefs really called into question, because if that happens too much then they would question whether they can tell what’s true in general?
I agree that there’s some effect like that, though I think for many people it’s pretty weak for many important beliefs, and there are other things that people are commonly referring to when talking about vulnerability. E.g. the things I mentioned, or feeling judged and thereby subtly threatened/pressured by social reality, or other things.
But I also am not sure whether you’re putting this forward as something we disagree about. I would still say that this type of fear is itself an interesting object of investigation and self-fortification over time. In other words, my point stands about the vulnerability as such being bad.
I think you have the gist, yes, and I think we disagree about the frequency and strength of this harm. If someone I know well told me that they had something vulnerable to share, I’d understand them as saying (modulo different auto-interpretations of mental state) that they’re much more exposed to this specific type of harm than normal in the conversation they expect to follow. Of course other, more solvable forms of vulnerability exist, but the people I’m close to basically know this and know me well enough to know that I also know this, so when they disclose vulnerability, marginal improvements are usually not available. I also think (though I can’t be sure) that this effect is actually quite strong for most people and for many of their beliefs.
I should note: there are contexts where I expect marginal improvements to be available! For example, as a teacher I often need to coordinate make-up exams or lectures with students, and this is often because the students are experiencing things that are difficult to share. When vulnerability is just an obstacle to disclosure, I think I agree with you fully. I don’t think this case is typical of vulnerability.
I guess the last point of disagreement is the claim that this is something most people should try to fortify against over time. More concretely, the claim that most people I interact with should try to fortify against this over time, given these assumptions: that you accurately believe that people in your social sphere don’t experience this type of harm strongly, that I accurately believe that people in my social sphere do experience it strongly, and that if you believe most people in your sphere should tone it down, you’d believe it even more strongly for people in my sphere.
For me, this type of fear is a load-bearing component in the preservation of my personal identity, and I suspect that things are similar for most people. I don’t think it’s a coincidence that the rationalist community has very high rates of psychosis and is the only community I’m aware of that treats unusual numbness to this sort of pain as an unalloyed and universal virtue! I think most people would agree that it’s good to be able to change your mind even when it’s painful, especially when it’s painful. But for most communities, the claim that it shouldn’t be painful to change your mind on a certain subject coincides with the claim that that subject shouldn’t be a core pillar of one’s identity. The claim that it shouldn’t be painful to change your mind on any subject, that the pain is basically a cognitive flaw, albeit understandable and forgivable and common, seems unique to this community.
(Also sorry for sentence structure here, I couldn’t figure out how to word this in a maximally-readable way for some reason. Thank you for reading me closely, I appreciate the effort.)
For me, this type of fear is a load-bearing component in the preservation of my personal identity, and I suspect that things are similar for most people. I don’t think it’s a coincidence that the rationalist community has very high rates of psychosis and is the only community I’m aware of that treats unusual numbness to this sort of pain as an unalloyed and universal virtue!
I don’t think numbness to pain is good, or that numbness or ignoring fear is good. The “fortification” I refer to is about trusting your sense of pain / suffering / fear / anger / defensiveness / flight deeply, and being “on their side”, and then actually addressing what they’re pointing at. This is a very patient process: just because you think you understood what you were afraid of and then did something to alleviate the real danger there, doesn’t mean you won’t still be afraid, and if so, you still don’t do the thing that you’re afraid of. Maybe the fear is seeing something else additional, that’s also real, to be afraid of; or maybe you just haven’t gotten comfortable with how you addressed the fear, like you want to see it proven out more, or something. Either way, it’s patient.
I think most people would agree that it’s good to be able to change your mind even when it’s painful, especially when it’s painful.
That’s the opposite of what I’m saying. I’m saying try to figure out why it’s painful—what is being damaged / hurt—and then try to protect that thing even more. Then I’m saying that sometimes, when you’ve done that, it doesn’t hurt to do the thing that previously did hurt, but there’s nothing unwholesome here; rather, you’ve healed an unnecessary wound / exposure.
The claim that it shouldn’t be painful to change your mind on any subject, that the pain is basically a cognitive flaw, albeit understandable and forgivable and common, seems unique to this community.
I don’t make that claim and haven’t written anything to that effect here.
I agree that you haven’t made that claim but I’m struggling to find an interpretation of what you’ve written that doesn’t imply it. In particular, in my model of your position, this is exactly the claim “vulnerability itself is bad (although it may accompany good things)” applied to the sort of vulnerability that is the risk of changing one’s identity-bearing beliefs. Maybe the following will help me pin down your position better:
That’s the opposite of what I’m saying. I’m saying try to figure out why it’s painful—what is being damaged / hurt—and then try to protect that thing even more. Then I’m saying that sometimes, when you’ve done that, it doesn’t hurt to do the thing that previously did hurt, but there’s nothing unwholesome here; rather, you’ve healed an unnecessary wound / exposure.
I agree that this is a plausible procedure and sometimes works, but how often do you expect this to work? Is it plausible to you that sometimes you figure out why it’s painful, but that knowledge doesn’t make it less painful, and yet the thing you’re afraid of doing is still the thing you’re supposed to do? Or does this not happen on your model of identity risk and vulnerability?
EDIT: I guess I should mention that I’m aware this is the opposite of what you’re saying, and my understanding is that this is very nearly the opposite of the statement you disclaim at the end here. We agree that people should be able to change their minds, and that sometimes the process of changing one’s mind seems painful. So either people should be able to change their minds despite the risk of pain, or people should be able to rearrange their mind until the process is not painful, and if it’s the latter, then an especially well-arranged mind would be able to do this quickly and would not anticipate pain in the first place. I’m not sure where you disagree with this chain of reasoning and I’m not sure I see where you can.
Is it plausible to you that sometimes you figure out why it’s painful, but that knowledge doesn’t make it less painful, and yet the thing you’re afraid of doing is still the thing you’re supposed to do?
In practice, yes, absolutely, all the time.
or people should be able to rearrange their mind until the process is not painful,
This is the part that is complicated and often infeasible.
I agree that this is a plausible procedure and sometimes works, but how often do you expect this to work?
Sometimes yes, sometimes no, it depends on the person and context; I would guess more than you seem to think, IDK.
But I don’t think this is the crux for what I’m saying. What I’m saying, quoting from https://www.lesswrong.com/posts/fKoZmewSEwpfHj5Rg/easy-vs-hard-emotional-vulnerability?commentId=kG9MhGGPpZkjpzcva :

It’s more that I want people to have two totally separate concepts for [vulnerability as such, i.e. exposure to harm] and [vulnerability, all that openness / unguardedness / working with tender areas / trust / reliance / doing hard things together stuff]. These things are related, as has been discussed, but separate conceptually and practically.
I agree that this is the crux but I don’t see how this is different from what we’ve been talking about? In particular, I’m trying to argue that these notions have a big intersection, and maybe even that the second kind is a subset of the first kind (there are types of openness and trust for which we can eliminate all the excess exposure to harm, but I think they’re qualitatively different from the best kinds of openness and trust; if you think the difference is not qualitative, or that it’s obviated when we consider exposure to harm correctly, then it wouldn’t be a subset.) As a concrete example, I’m trying to argue that the sort of interaction that involves honestly exposing a core belief to another person and asking for an outside perspective, with the goal of correcting that belief if it’s mistaken, is not just practically but necessarily in the intersection (it clearly requires openness and I’m trying to argue that it also requires exposure to harm for minds worth being.) Following that, I’m trying to argue that separating these concepts is a bad idea because, while this makes it easier to talk about the sorts of excess exposure we can and should eliminate, it makes it harder to recognize the exposure that we can’t or shouldn’t eliminate, and we lose more than we gain in this trade.
I’m trying to argue that it also requires exposure to harm for minds worth being
Ok. And, I take it, you’re not just saying “in practice, it’s often infeasible to separate the harm risk”. Yeah, my guess is we disagree about this, if by “requires” you mean “as a conceptually essential element of this kind of openness” (like how some amount of suffering might be an essential element of learning).
Actually, on second thought, even if it’s inseparable, I still want to make the original point about the conceptual difference. You still want to do separate accounting. The exposure to harm is still bad in and of itself. It’s still a cost, even if (hypothetically) you could never possibly avoid that cost. My guess is that there probably are exceptions, and the “building trust” thing is maybe sort of an exception, to “the exposure to harm is bad”—but only by having an additional separate good consequence of the exposure to harm. The exposure itself is bad! Like, I’d want to say “Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it”. Why say all that if you’re just going to do it anyway? Because you don’t want to get confused and think that the exposure itself is good. And I think in fact people do get confused this way, and it’s bad!
I think I have a better understanding of your position now! I’m still a bit confused by your use of the word “bad”; it seems like you’re using it to mean something other than “could meaningfully be made better”. Semantically, I don’t really know what you’re referring to when you say “the exposure itself”—the point here is that there is no such thing as the exposure itself! It is not always meaningful to split things up. There is a thing that I would call true openness and you might call something like necessary vulnerability (which you don’t necessarily need to believe exists), and that thing entails the potential for deeper social connection and the potential for emotional harm, but this just does not mean we can separate it into a connection part and a harm part. I think I’m back to my original objection basically: we should not always do goal factoring because our goals do not always factor. The point of factoring something is to break it into parts which can basically be optimized separately, with some consideration of cross-interactions, but when the cross-interactions dominate, the factoring obscures the goal.
I’m also not convinced that people get confused this way? Maybe there is a way to define “bad” that makes this confusion even coherent, but I can’t think of such a way. The only way I can imagine a person endorsing the claim that the exposure itself is good is as a strong rejection of the premise that the thing that is actually good is separable from the exposure. Because, after all, if exposure under certain conditions (something like: exposure to a person I have good reason to trust, having thought about and addressed the ways it could be solvably bad, in pursuit of something I value more than I am afraid of the risk of potential pain) always corresponds with a good that is worth taking on that exposure, then every conceptually-possible version of that exposure is worth taking on net. What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good? Maybe you can say that there’s no category difference between the sort of exposure that can be productively eliminated and the sort of exposure that can’t, but the fact that I can describe the difference between these categories seems to suggest otherwise. The only way I can see for this to fail is for the description I gave to be incoherent, which only seems possible if one of the categories is empty.
On the other hand I think many people are miscalibrated on this sort of calculation, such that they either take more or less emotional risk than they ideally would, and I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure. I expect any sort of truly separate accounting to involve optimization on the risk side without consideration of the trust side, and because the effects on the trust side are subtle and harder to remember (in the sense that the sort of trust I care about is really really good in my experience, it’s the sort of thing that takes basically all of my cognition to fully experience, so when any part of my cognition is focused elsewhere I cannot accurately remember how good it is), this will tend to lose out to an unseparated approach.
(I don’t really have standing to say this part, so feel free to dismiss it with as much prejudice as is justified, but this:
Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it.
really does not seem like the sort of thought process that could properly calibrate a person’s exposure to emotional risk! My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing.)
EDIT: I think we’ve nailed down our disagreement about the object-level thing and we’re unlikely to come to agree on that; it seems like the remaining discussion is just about which distinctions are useful. Maybe this is the same disagreement and we’re unlikely to come to agree about this either? My preference is to talk about vulnerability by default, with the understanding that vulnerability is a contingent part of certain social goods, but in some cases the vulnerability can be trimmed without infringing on the social goods, so I would talk about unnecessary or excessive vulnerability in those cases. My understanding of your preference is to talk about vulnerability by default, with the understanding that vulnerability is the (strictly bad) exposure to emotional pain that often accompanies some social interactions. But it’s at least plausible that vulnerability could be a contingent part of certain social goods, so in discussing those sorts of social goods at least as hypothetical objects, you’d refer to something like necessary vulnerability? And in cases where vulnerability could in theory be trimmed away by a sufficiently-refined self-model, but where that level of refinement is not easy to achieve and in practice the right thing to do is to proceed under the theoretically-resolvable uncertainty, something like worthwhile vulnerability? And then our disagreements in your language would be: I think that necessary vulnerability actually exists in theory, and that the set of necessary or worthwhile vulnerability is big enough that we shouldn’t separate it from primitive vulnerability, and you would take the opposite on both of those claims. Am I understanding correctly?
What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good?
As an analogy, consider paying for stuff with money. (We could think about how it’s actually good that the other person gets money, because that way they can invest it more to make more stuff efficiently, which I agree with, but I’d bid to put that aside for the analogy.) From your selfish perspective, is that good or bad or what? Generally, you’d aim, and probably usually succeed, at paying for stuff when it’s net good. But that’s not how you do accounting, you still want to account for the part where you give up some money as a bad aspect of the total transaction.
I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure.
I would want to point out that constructing complicated boundaries is difficult but is a worthwhile task that allows you to avoid blunt force action in favor of more precise action. In this case, I’d be concerned about the fact that there’s this under/over-exposure tradeoff. To me, that says that we’re not identifying well which cases of exposure are worthwhile and which aren’t.
My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing
Yeah, I think you’re wrong about this regarding many people, including me. I observe in myself and in other people that when we slow down and talk/think about why something is harm-exposing, we often figure out how to make it less harm-exposing while still being able to explore things more openly.
Maybe you could expand on this?
I think I can, let me know if this explanation makes sense. (If not then this is probably also the reason I didn’t understand your clarification. Also this ended up pretty long so I probably underexpressed originally, sorry about that.)
What I mean here is that we shouldn’t try to separate openness from vulnerability because openness can’t exist without vulnerability. What do we hope to gain from openness? I think there are basically three answers. We might be embarrassed by our interests, but know that we could benefit from those interests being known to certain people. We might want an outside perspective on a personal matter, because often we’re too close to ourselves to evaluate our situation or our actions reasonably. We might want to make a costly social display, signalling our emotional investment in a particular relationship by demonstrating parts of ourselves that we wouldn’t demonstrate to somebody we weren’t invested in. In practice we’re usually doing a combination of all three of these when we make displays of vulnerability.
Which of these can be done without some form of personal risk? We can probably do the first one: in unusually sex-positive communities, for example, people can often disclose their fetishes relatively easily and without much fear of backlash. As a more mundane example, a person might not want to talk about anime with their coworkers, but be excited to talk about it at an anime convention.
The other two I think require risk to be worthwhile. When we seek the counsel of others, not merely their expertise, we are risking the comforting belief that we understand something or are justified in our actions. In an exchange like this:
> I might even respond to “I have something vulnerable to say” with “Oh ok, I’m happy to listen, but also I’d suggest that we could first meditate together for just a bit on what is literally vulnerable about it, circumspectly, and see if we can decrease that aspect”,
if I were the other party, I think this response would make it difficult to access openness in the proceeding conversation. When I say “I have something vulnerable to say”, I might mean a few different things, but they’re almost all of the flavor “I want your perspective on a topic where I have trouble trusting my own perspective. It would be temporarily painful for me if your perspective were to differ much from my own, but I find you some combination of safe enough to talk to and insightful enough to be worth talking to that I would like you to give me your true perspective. From this I hope to (1) achieve a better understanding of my own circumstances, even at the cost of being upset for a while, (2) to show you that you’re important to me in a way that I’m not incentivized to fake, and (3) to show you that I am the sort of person who cares about more than just my own perspective. If your perspective does differ significantly from mine, I hope you will be careful but honest when you explain that to me.”
Maybe there is a sort of person who can always get (1) without asking for (2) and (3); that is, can have at least some of the goods without any of the bads! This sort of person would need to be unusually resistant to, perhaps immune to, embarrassment or judgment. They would be able to communicate lots of relevant facts about themselves, even those which other people might hesitate to communicate, and would be perfectly willing to accept a different point of view without even a hint of regret or attachment to their old perspective. But this person wouldn’t be able to make costly signals to demonstrate genuine social connection. I find it hard to imagine what emotional closeness even could look like for a person like this, and I struggle to describe the life I imagine they would lead as one involving anything I recognize as “openness”.
(Moreover, I’m not convinced that this is a way someone can be while still having any sort of personal identity whatsoever. Admittedly I haven’t known many people who tried to be this way, but the couple I have known were extraordinarily emotionally dangerous people, did quite a bit of social manipulation and when confronted seemed unable to understand what social manipulation even is, and ended up suffering mental breaks, although of course I can’t be completely sure of their mental states and can’t speak at all to causation.)
To sum up the most important points: I think deep social bonds (those built out of justified belief in mutual care) are inherently vulnerable. They don’t just coincide with vulnerability, they are made of it. My thoughts and my self-perceptions can cause me pain. If I try to ensure that another person cannot cause me pain, or can cause me as little pain as possible while still giving me whatever social goods I can get risklessly from them, then I’m almost by definition trying to keep them as separate from my thoughts and my self-image as I can, and this seems synonymous with trying not to care about them. There are some social goods that can be had risklessly in certain contexts, and it’s worthwhile to think about how often we want to be in those contexts and how much we value those goods, but the answers should probably be “occasionally” and “not much, relatively”. If we want to be more open and authentic around others and to get more of the social goods we derive from openness and authenticity, then focusing on the evasion of vulnerability is very nearly the worst possible approach.
Ok so for context, I definitely agree that in practice, in many cases it’s not feasible to completely derisk openness from vulnerability, and often this means that the risk is totally worth it.
That said, I’m saying that the vulnerability itself, the harm risk exposure, is bad. So for example this could recommend noticing the exposure and studying it and sometimes marginally decreasing it, even as you’re still taking it on, if you can’t get rid of it entirely without also jettisoning some other precious stuff.
In this example, I would in real life become genuinely curious as to why the belief is comforting, in what manner it is comforting, what would be potentially harmful about having the belief contradicted, and how to avoid that harm.
One example is cognitive miserliness: rethinking things takes energy, so by default we avoid it. This is not a mere flaw! It’s an important and useful cognitive instinct. If I’m going to override that instinct, I’d rather at least be aware that I’m doing that; and even better would be to check in with myself about whether I want to spend the energy to rethink things right now—sometimes I don’t! And sometimes I do, and after I genuinely check in with myself and get a “yes”, it feels less bad / harmful to get the belief update!
Another example is bucket errors. I don’t want to answer someone’s question if, by their psychology, I’m also accidentally answering “can I pursue my dreams of being a writer or not”! Unless of course tying my answer to the question to that other question actually makes sense. But often we tie things in way that doesn’t make sense, or at least, in a way where you could do better if you thought about it more and debucketed somewhat.
In my idioculture, these descriptions are ambiguous between intentions I would consider good and intentions I would consider bad. Roughly, I’d say it’s very important that the action is good / makes sense / is healthy / is wholesome on the concrete object level, without the signaling stuff, in order to be a good signal. Otherwise what you’re actually signaling is stuff like “I’ll do needless damage to myself for the sake of this relationship”, which, I don’t mean to just generally derogate that stance because it sounds like it might come from deep love / devotion / need / other important things, but also I would say that it’s not in fact healthy, and is not good for you or for the other person (on the assumption that the other person is a good person; if they are not a good person, they might enjoy or even specifically target that sort of self-damage).
I think this is totally deeply incorrect. You can simply invest your efforts to help the other person in a healthy way. Another way is trusting / relying on the other person, including in exposure to risk of harm, when that exposure is required by the task. For example, rock climbing with ropes where you rely on your belayer. Or starting a company, raising a child, etc. There’s a ton of challenges that are worthwhile, and which often demand that we rely on each other including re/ risk of harm.
I agree with this, I think our disagreement is mainly about how much we expect to decrease this exposure before we start jettisoning the precious stuff.
I agree that this is a reasonable way to avoid some unnecessary risk, but the examples you give seem odd to me. Maintaining beliefs is often comforting because of positionality. A good reasoner should be highly willing to change their mind on anything given the right circumstances, but a good reasoner who found themselves constantly changing their mind and rarely anticipating it would start worrying about hypotheses like “I am fundamentally detached from reality and unable to reliably distinguish truth from fiction” and become quite distraught. I think this is the typical way for beliefs to be comforting and this applies to basically all beliefs, so I don’t think we can expect to avoid at least some amount of harm in most instances of vulnerability. (Of course this consideration is pretty small for most questions of fact! If my friend is wrong about which toppings are available at some pizza place, I don’t expect they would suffer much positional pain from being corrected. But of course most interactions don’t involve meaningful vulnerability from either party, which is why the small vulnerability that does exist is usually not acknowledged.)
I agree, this is why part (1) was important! Vulnerability can be used incorrectly, I’m not saying that we should pay no attention to the fact that openness induces risk. Indeed, it’s not that hard to describe types of people who consistently misuse vulnerability and cause harm. People can overshare, inappropriately disclosing information about themselves in the hope that their demonstration of vulnerability will produce a social connection, while not valuing the perspective of their counterparty enough to justify the exposure. People can also deceive (or self-deceive!), incorrectly signalling the pain they expect to experience if their perspective is challenged, either to provoke sympathy or demonstrate emotional strength. This just means that we should not be open with everyone, which is why the signal works at all.
Relying on each other is not the sort of social bond I have in mind here. Rock climbing or starting a business are excellent demonstrations of coincidence of interests or goals, and this produces some sort of social bond, but it’s not the same sort of social bond that vulnerability produces and is not a sufficient replacement, at least in my experience. Helping others and being helped in return, similarly, produces a social bond, but does not replace the need for vulnerability. These can be entryways, and indeed, most close friendships and relationships that I’m aware of began with a coincidence of interests and progressed to joint projects and mutual favors before expressions of vulnerability. But I’m not aware of any close friendships or healthy relationships (in my estimation of what “close” and “healthy” mean) that did not, at some point, involve unguarding, and as far as I can tell, this is where closeness actually begins. Raising a child together can probably produce this type of social bond, but if two (or more I guess) people consistently assess that they’re more scared of the other’s judgment than they are interested in the other’s potentially judgmental opinion on topics they care about, or if they’re only able to solicit the other’s opinion because the other person’s evaluation of them doesn’t feed into their sense of self enough that it could sting, I really really really don’t think those people should raise a child together.
(I guess I should mention the following: of course any starting point can work for any task. If you have enough foresight and are sufficiently good at weighing costs and benefits, you can start by trying to assess the appropriate amount of emotional risk and end up with a perfect policy. However, in this instance, under-risking is much worse than over-risking because it is self-insulating. In my experience, people who are too eager to demonstrate emotional vulnerability get lots of social feedback and settle to a more sustainable and healthy pace pretty quickly. Meanwhile those who are too timid can spend years and decades failing to find friendships that sustain them, and because they less often engage in the vulnerable practice of soliciting outside views from a person they care enough about to take seriously on matters of the self, they often don’t know that things can be different. We agree in principle that some amount of risk is justified but not all risk is justified, and because the evaluation of these quantities varies so much from situation-to-situation, I doubt we’ll be able to sketch out an example of explicit disagreement in enough detail to be fully sure that we disagree about the object-level best policy for the people in the example. The main reason I’m objecting this strongly is that I expect that, to a person who already under-risks, the framing and examples you provide will systematically recommend under-risking. An over-risker might apply the same framing and not end up with the same bias, but I think we should worry much less about how over-riskers will receive our advice on this topic, because over-riskers for the most part don’t need advice.)
I’ve read this a couple times and am still not following, or maybe I disagree / don’t see where you’re coming from. Are you basically saying that for most people, for most (important?) beliefs, that person has some risk of being harmed by getting those beliefs really called into question, because if that happens too much then they would question whether they can tell what’s true in general?
I agree that there’s some effect like that, though I think for many people it’s pretty weak for many important beliefs, and there are other things that people are commonly referring to when talking about vulnerability. E.g. the things I mentioned, or feeling judged and thereby subtley threatened/pressured by social reality, or other things.
But I also am not sure whether you’re putting this forward as something we disagree about. I would still say that this type of fear is itself an interesting object of investigation and self-fortification over time. In other words, my point stands about the vulnerability as such being bad.
I think you have the gist, yes, and I think we disagree about the frequency and strength of this harm. If someone I know well told me that they had something vulnerable to share, I’d understand them as saying (modulo different auto-interpretations of mental state) that they’re much more exposed to this specific type of harm than normal in the conversation they expect to follow. Of course other, more solvable forms of vulnerability exist, but the people I’m close to basically know this and know me well enough to know that I also know this, so when they disclose vulnerability, marginal improvements are usually not available. I also think (though I can’t be sure) that this effect is actually quite strong for most people and for many of their beliefs.
I should note: there are contexts where I expect marginal improvements to be available! For example, as a teacher I often need to coordinate make-up exams or lectures with students, and this is often because the students are experiencing things that are difficult to share. When vulnerability is just an obstacle to disclosure, I think I agree with you fully. I don’t think this case is typical of vulnerability.
I guess the last point of disagreement is the claim that this is something most people should try to fortify against over time. More concretely, that most people I interact with should try to fortify against this over time, on the assumptions that you accurately believe that people in your social sphere don’t experience this type of harm strongly, that I accurately believe that people in my social sphere do experience it strongly, and that if you believe most people in your sphere should tone it down, you’d believe so even more strongly for people in my sphere.
For me, this type of fear is a load-bearing component in the preservation of my personal identity, and I suspect that things are similar for most people. I don’t think it’s a coincidence that the rationalist community has very high rates of psychosis and is the only community I’m aware of that treats unusual numbness to this sort of pain as an unalloyed and universal virtue! I think most people would agree that it’s good to be able to change your mind even when it’s painful, especially when it’s painful. But for most communities, the claim that it shouldn’t be painful to change your mind on a certain subject coincides with the claim that that subject shouldn’t be a core pillar of one’s identity. The claim that it shouldn’t be painful to change your mind on any subject, that the pain is basically a cognitive flaw, albeit understandable and forgivable and common, seems unique to this community.
(Also sorry for sentence structure here, I couldn’t figure out how to word this in a maximally-readable way for some reason. Thank you for reading me closely, I appreciate the effort.)
I don’t think numbness to pain is good, or that numbness or ignoring fear is good. The “fortification” I refer to is about trusting sense of pain / suffering / fear / anger / defensiveness / flight deeply, and being “on their side”, and then actually addressing what they’re pointing at. This is a very patient process: just because you think you understood what you were afraid of and then did something to alleviate the real danger there, doesn’t mean you won’t still be afraid, and if so you, you still don’t do the thing that you’re afraid of. Maybe the fear is seeing something else additional, that’s also real, to be afraid of; or maybe you just haven’t gotten comfortable with how you addressed the fear, like you want to see it proven out more, or something. Either way, it’s patient.
That’s the opposite of what I’m saying. I’m saying try to figure out why it’s painful—what is being damaged / hurt—and then try to protect that thing even more. Then I’m saying that sometimes, when you’ve done that, it doesn’t hurt to do the thing that previously did hurt, but there’s nothing unwholesome here; rather, you’ve healed an unnecessary wound / exposure.
I don’t make that claim and haven’t written anything to that effect here.
I agree that you haven’t made that claim but I’m struggling to find an interpretation of what you’ve written that doesn’t imply it. In particular, in my model of your position, this is exactly the claim “vulnerability itself is bad (although it may accompany good things)” applied to the sort of vulnerability that is the risk of changing one’s identity-bearing beliefs. Maybe the following will help me pin down your position better:
I agree that this is a plausible procedure and sometimes works, but how often do you expect this to work? Is it plausible to you that sometimes you figure out why it’s painful, but that knowledge doesn’t make it less painful, and yet the thing you’re afraid of doing is still the thing you’re supposed to do? Or does this not happen on your model of identity risk and vulnerability?
EDIT: I guess I should mention that I’m aware this is the opposite of what you’re saying, and my understanding is that this is very nearly the opposite of the statement you disclaim at the end here. We agree that people should be able to change their minds, and that sometimes the process of changing one’s mind seems painful. So either people should be able to change their minds despite the risk of pain, or people should be able to rearrange their mind until the process is not painful, and if it’s the latter, then an especially well-arranged mind would be able to do this quickly and would not anticipate pain in the first place. I’m not sure where you disagree with this chain of reasoning and I’m not sure I see where you can.
In practice, yes, absolutely, all the time.
This is the part that is complicated and often infeasible.
Sometimes yes, sometimes no, it depends on the person and context; I would guess more than you seem to think, IDK.
But I don’t think this the crux for what I’m saying. What I’m saying, quoting from https://www.lesswrong.com/posts/fKoZmewSEwpfHj5Rg/easy-vs-hard-emotional-vulnerability?commentId=kG9MhGGPpZkjpzcva :
I agree that this is the crux but I don’t see how this is different from what we’ve been talking about? In particular, I’m trying to argue that these notions have a big intersection, and maybe even that the second kind is a subset of the first kind (there are types of openness and trust for which we can eliminate all the excess exposure to harm, but I think they’re qualitatively different from the best kinds of openness and trust; if you think the difference is not qualitative, or that it’s obviated when we consider exposure to harm correctly, then it wouldn’t be a subset.) As a concrete example, I’m trying to argue that the sort of interaction that involves honestly exposing a core belief to another person and asking for an outside perspective, with the goal of correcting that belief if it’s mistaken, is not just practically but necessarily in the intersection (it clearly requires openness and I’m trying to argue that it also requires exposure to harm for minds worth being.) Following that, I’m trying to argue that separating these concepts is a bad idea because, while this makes it easier to talk about the sorts of excess exposure we can and should eliminate, it makes it harder to recognize the exposure that we can’t or shouldn’t eliminate, and we lose more than we gain in this trade.
Ok. And, I take it, you’re not just saying “in practice, it’s often infeasible to separate the harm risk”. Yeah, my guess is we disagree about this, if by “requires” you mean “as a conceptually essential element of this kind of openness” (like how some amount of suffering might be an essential element of learning).
Actually, on second thought, even if it’s inseparable, I still want to make the original point about the conceptual difference. You still want to do separate accounting. The exposure to harm is still bad in and of itself. It’s still a cost, even if (hypothetically) you could never possibly avoid that cost. My guess is that there probably are exceptions, and the “building trust” thing is maybe sort of an exception, to “the exposure to harm is bad”, but only by having an additional separate good consequence of the exposure to harm. The exposure itself is bad! Like, I’d want to say “Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it”. Why say all that if you’re just going to do it anyway? Because you don’t want to get confused and think that the exposure itself is good. And I think in fact people do get confused this way, and it’s bad!
I think I have a better understanding of your position now! I’m still a bit confused by your use of the word “bad”; it seems like you’re using it to mean something other than “could meaningfully be made better”. Semantically, I don’t really know what you’re referring to when you say “the exposure itself”: the point here is that there is no such thing as the exposure itself! It is not always meaningful to split things up. There is a thing that I would call true openness and you might call something like necessary vulnerability (which you don’t necessarily need to believe exists), and that thing entails the potential for deeper social connection and the potential for emotional harm, but this just does not mean we can separate it into a connection part and a harm part. I think I’m back to my original objection, basically: we should not always do goal factoring because our goals do not always factor. The point of factoring something is to break it into parts which can basically be optimized separately, with some consideration of cross-interactions, but when the cross-interactions dominate, the factoring obscures the goal.
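(To gesture at what I mean by the cross-interactions dominating, here’s a purely illustrative toy decomposition; the functions are made up, not a real model. Suppose the value of an interaction were $V = f(\text{connection}) + g(\text{exposure}) + h(\text{connection}, \text{exposure})$, with $f$ the separable good, $g$ the separable bad, and $h$ the interaction term. If $h$ is small, separate accounting works fine: maximize $f$, minimize $g$, adjust at the margins. But if nearly all of the value lives in $h$, because the connection is only reachable through the exposure, then “optimize the parts separately” stops being a meaningful instruction, and the decomposition mostly hides where the value is.)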
I’m also not convinced that people get confused this way? Maybe there is a way to define “bad” that even makes this confusion coherent, but I can’t think of such a way. The only way I can imagine a person endorsing the claim that the exposure itself is good is as a strong rejection of the premise that the thing that is actually good is separable from the exposure. Because, after all, if exposure under certain conditions (something like: exposure to a person I have good reason to trust, having thought about and addressed the ways it could be solvably bad, in pursuit of something I value more than I fear the potential pain) always corresponds with a good that makes the exposure worth taking on, then every conceptually-possible version of that exposure is worth taking on net. What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good? Maybe you can say that there’s no category difference between the sort of exposure that can be productively eliminated and the sort of exposure that can’t, but the fact that I can describe the difference between these categories seems to suggest otherwise. The only way I can see for this to fail is for the description I gave to be incoherent, which only seems possible if one of the categories is empty.
On the other hand, I think many people are miscalibrated on this sort of calculation, such that they take either more or less emotional risk than they ideally would, and I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure. I expect any sort of truly separate accounting to involve optimization on the risk side without consideration of the trust side, and because the effects on the trust side are subtle and harder to remember (in the sense that the sort of trust I care about is really, really good in my experience; it’s the sort of thing that takes basically all of my cognition to fully experience, so when any part of my cognition is focused elsewhere I cannot accurately remember how good it is), this will tend to produce worse results than an unseparated approach.
(This part I have no standing to say, so feel free to dismiss it with as much prejudice as is justified, but this:
really does not seem like the sort of thought process that could properly calibrate a person’s exposure to emotional risk! My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it, because unlike overexposure, underexposure is self-reinforcing.)
EDIT: I think we’ve nailed down our disagreement about the object-level thing and we’re unlikely to come to agree on that; it seems like the remaining discussion is just about which distinctions are useful. Maybe this is the same disagreement and we’re unlikely to come to agree about this either? My preference is to talk about vulnerability by default, with the understanding that vulnerability is a contingent part of certain social goods, but that in some cases the vulnerability can be trimmed without infringing on the social goods, so I would talk about unnecessary or excessive vulnerability in those cases. My understanding of your preference is to talk about vulnerability by default, with the understanding that vulnerability is the (strictly bad) exposure to emotional pain that often accompanies some social interactions. But it’s at least plausible that vulnerability could be a contingent part of certain social goods, so in discussing those sorts of social goods, at least as hypothetical objects, you’d refer to something like necessary vulnerability? And in cases where vulnerability could in theory be trimmed away by a sufficiently refined self-model, but where that level of refinement is not easy to achieve and in practice the right thing to do is to proceed under the theoretically-resolvable uncertainty, something like worthwhile vulnerability? And then our disagreements, in your language, would be: I think that necessary vulnerability actually exists in theory, and that the set of necessary or worthwhile vulnerability is big enough that we shouldn’t separate it from primitive vulnerability, and you would take the opposite position on both of those claims. Am I understanding correctly?
As an analogy, consider paying for stuff with money. (We could think about how it’s actually good that the other person gets money, because that way they can invest it to make more stuff more efficiently, which I agree with, but I’d bid to put that aside for the analogy.) From your selfish perspective, is that good or bad or what? Generally, you’d aim, and probably usually succeed, at paying for stuff when it’s net good. But that’s not how you do accounting: you still want to account for the part where you give up some money as a bad aspect of the total transaction.
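(Concretely, in case the accounting point isn’t clear: if you pay, say, $10 for something you value at $15, the ledger reads +$15 of value and -$10 of cost, for +$5 net. The transaction is net good, but the -$10 still gets recorded as a cost; you don’t reclassify giving up the money as itself a good thing just because the whole trade was worth it.)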
I would want to point out that constructing complicated boundaries is difficult, but it is a worthwhile task that allows you to avoid blunt-force action in favor of more precise action. In this case, I’d be concerned about the fact that there’s this under/over-exposure tradeoff. To me, that says that we’re not doing a good job of identifying the cases where the exposure is worthwhile and the cases where it isn’t.
Yeah, I think you’re wrong about this regarding many people, including me. I observe in myself and in other people that when we slow down and talk/think about why something is harm-exposing, we often figure out how to not be so exposed to harm while still being able to explore things more openly.