Is it plausible to you that sometimes you figure out why it’s painful, but that knowledge doesn’t make it less painful, and yet the thing you’re afraid of doing is still the thing you’re supposed to do?
In practice, yes, absolutely, all the time.
or people should be able to rearrange their mind until the process is not painful,
This is the part that is complicated and often infeasible.
I agree that this is a plausible procedure and sometimes works, but how often do you expect this to work?
Sometimes yes, sometimes no, it depends on the person and context; I would guess more than you seem to think, IDK.
But I don’t think this is the crux for what I’m saying. What I’m saying, quoting from https://www.lesswrong.com/posts/fKoZmewSEwpfHj5Rg/easy-vs-hard-emotional-vulnerability?commentId=kG9MhGGPpZkjpzcva :

It’s more that I want people to have two totally separate concepts for [vulnerability as such, i.e. exposure to harm] and [vulnerability, all that openness / unguardedness / working with tender areas / trust / reliance / doing hard things together stuff]. These things are related, as has been discussed, but separate conceptually and practically.
I agree that this is the crux but I don’t see how this is different from what we’ve been talking about? In particular, I’m trying to argue that these notions have a big intersection, and maybe even that the second kind is a subset of the first kind (there are types of openness and trust for which we can eliminate all the excess exposure to harm, but I think they’re qualitatively different from the best kinds of openness and trust; if you think the difference is not qualitative, or that it’s obviated when we consider exposure to harm correctly, then it wouldn’t be a subset).

As a concrete example, I’m trying to argue that the sort of interaction that involves honestly exposing a core belief to another person and asking for an outside perspective, with the goal of correcting that belief if it’s mistaken, is not just practically but necessarily in the intersection (it clearly requires openness, and I’m trying to argue that it also requires exposure to harm for minds worth being).

Following that, I’m trying to argue that separating these concepts is a bad idea because, while this makes it easier to talk about the sorts of excess exposure we can and should eliminate, it makes it harder to recognize the exposure that we can’t or shouldn’t eliminate, and we lose more than we gain in this trade.
I’m trying to argue that it also requires exposure to harm for minds worth being
Ok. And, I take it, you’re not just saying “in practice, it’s often infeasible to separate the harm risk”. Yeah, my guess is we disagree about this, if by “requires” you mean “as a conceptually essential element of this kind of openness” (like how some amount of suffering might be an essential element of learning).
Actually, on second thought, even if it’s inseparable, I still want to make the original point about the conceptual difference. You still want to do separate accounting. The exposure to harm is still bad in and of itself. It’s still a cost, even if (hypothetically) you could never possibly avoid that cost. My guess is that there probably are exceptions, and the “building trust” thing is maybe sort of an exception, to “the exposure to harm is bad”—but only by having an additional separate good consequence of the exposure to harm. The exposure itself is bad! Like, I’d want to say “Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it”. Why say all that if you’re just going to do it anyway? Because you don’t want to get confused and think that the exposure itself is good. And I think in fact people do get confused this way, and it’s bad!
I think I have a better understanding of your position now! I’m still a bit confused by your use of the word “bad”; it seems like you’re using it to mean something other than “could meaningfully be made better”. Semantically, I don’t really know what you’re referring to when you say “the exposure itself”—the point here is that there is no such thing as the exposure itself! It is not always meaningful to split things up. There is a thing that I would call true openness and you might call something like necessary vulnerability (which you don’t necessarily need to believe exists), and that thing entails the potential for deeper social connection and the potential for emotional harm, but this just does not mean we can separate it into a connection part and a harm part. I think I’m back to my original objection basically: we should not always do goal factoring because our goals do not always factor. The point of factoring something is to break it into parts which can basically be optimized separately, with some consideration of cross-interactions, but when the cross-interactions dominate, the factoring obscures the goal.
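To sketch what I mean by the cross-interactions dominating, here is a toy model (the notation is made up for this comment, not something either of us has committed to): suppose the only real choice variable is a single openness level $x$, with connection value $c(x)$ and exposure to harm $h(x)$, both increasing in $x$, and total value

$$V(x) = c(x) - h(x).$$

If $c$ and $h$ were driven by separate knobs, “maximize the connection” and “minimize the exposure” would be two subproblems you could work on independently. When they are both driven by the same $x$, separate accounting doesn’t buy you any extra degrees of freedom; the only meaningful question is where along $x$ the net value peaks, and treating $h$ as a term to be driven toward zero just pulls you away from that peak.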
I’m also not convinced that people get confused this way? Maybe there is a way to define “bad” that makes this confusion even coherent, but I can’t think of such a way. The only way I can imagine a person endorsing the claim that the exposure itself is good is as a strong rejection of the premise that the thing that is actually good is separable from the exposure. Because, after all, if exposure under certain conditions (something like: exposure to a person I have good reason to trust, having thought about and addressed the ways it could be solvably bad, in pursuit of something I value more than I am afraid of the risk of potential pain) always corresponds with a good that is worth taking on that exposure, then every conceptually-possible version of that exposure is worth taking on net. What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good? Maybe you can say that there’s no category difference between the sort of exposure that can be productively eliminated and the sort of exposure that can’t, but the fact that I can describe the difference between these categories seems to suggest otherwise. The only way I can see for this to fail is for the description I gave to be incoherent, which only seems possible if one of the categories is empty.
On the other hand I think many people are miscalibrated on this sort of calculation, such that they either take more or less emotional risk than they ideally would, and I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure. I expect any sort of truly separate accounting to involve optimization on the risk side without consideration of the trust side, and because the effects on the trust side are subtle and harder to remember (in the sense that the sort of trust I care about is really really good in my experience, it’s the sort of thing that takes basically all of my cognition to fully experience, so when any part of my cognition is focused elsewhere I cannot accurately remember how good it is), this will tend to lose out to an unseparated approach.
(This part I have no standing to say, so feel free to dismiss it with as much prejudice as is justified, but this:
Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it.
really does not seem like the sort of thought process that could properly calibrate a person’s exposure to emotional risk! My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing.)
EDIT: I think we’ve nailed down our disagreement about the object-level thing and we’re unlikely to come to agree on that; it seems like the remaining discussion is just about which distinctions are useful. Maybe this is the same disagreement and we’re unlikely to come to agree about this either? My preference is to talk about vulnerability by default, with the understanding that vulnerability is a contingent part of certain social goods, but in some cases the vulnerability can be trimmed without infringing on the social goods, so I would talk about unnecessary or excessive vulnerability in those cases. My understanding of your preference is to talk about vulnerability by default, with the understanding that vulnerability is the (strictly bad) exposure to emotional pain that often accompanies some social interactions. But it’s at least plausible that vulnerability could be a contingent part of certain social goods, so in discussing those sorts of social goods, at least as hypothetical objects, you’d refer to something like necessary vulnerability? And in cases where vulnerability could in theory be trimmed away by a sufficiently-refined self-model, but where that level of refinement is not easy to achieve and in practice the right thing to do is to proceed under the theoretically-resolvable uncertainty, something like worthwhile vulnerability? And then our disagreements in your language would be: I think that necessary vulnerability actually exists in theory, and that the set of necessary or worthwhile vulnerability is big enough that we shouldn’t separate it from primitive vulnerability, and you would take the opposite on both of those claims. Am I understanding correctly?
What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good?
As an analogy, consider paying for stuff with money. (We could think about how it’s actually good that the other person gets money, because that way they can invest it to make more stuff efficiently, which I agree with, but I’d bid to put that aside for the analogy.) From your selfish perspective, is that good or bad or what? Generally, you’d aim, and probably usually succeed, at paying for stuff when it’s net good. But that’s not how you do the accounting: you still want to account for the part where you give up some money as a bad aspect of the total transaction.
I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure.
I would want to point out that constructing complicated boundaries is difficult but is a worthwhile task that allows you to avoid blunt-force action in favor of more precise action. In this case, I’d be concerned about the fact that there’s this under/over-exposure tradeoff. To me, that says we’re not doing a good job of identifying which cases of exposure are worthwhile and which aren’t.
My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing
Yeah, I think you’re wrong about this regarding many people, including me. I observe in myself and in other people that when we slow down and talk/think about why something is harm-exposing, we often figure out how to not be so exposed to harm while still being able to explore things more openly.