I think I have a better understanding of your position now! I’m still a bit confused by your use of the word “bad”; it seems like you’re using it to mean something other than “could meaningfully be made better”. Semantically, I don’t really know what you’re referring to when you say “the exposure itself”: the point here is that there is no such thing as the exposure itself! It is not always meaningful to split things up. There is a thing that I would call true openness and you might call something like necessary vulnerability (which you don’t necessarily need to believe exists), and that thing entails both the potential for deeper social connection and the potential for emotional harm, but this just does not mean we can separate it into a connection part and a harm part. I think I’m back to my original objection, basically: we should not always do goal factoring, because our goals do not always factor. The point of factoring something is to break it into parts which can basically be optimized separately, with some consideration of cross-interactions, but when the cross-interactions dominate, the factoring obscures the goal.
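To put a toy model on that last point (the tiny function and all of its coefficients below are mine, made up purely to illustrate the shape of the claim, not anything from this exchange): if essentially all of the value of the exposure lives in its interaction with connection, then accounting for the exposure term on its own points you toward eliminating it, while the joint evaluation points the other way.

```python
# Toy sketch of "when cross-interactions dominate, the factoring obscures the goal".
# All numbers are invented for illustration.

def total_value(exposure, connection_potential):
    harm_risk = -1.0 * exposure                           # the term that looks strictly "bad" on its own
    interaction = 5.0 * exposure * connection_potential   # connection that is only realized through exposure
    return harm_risk + interaction

# Separate accounting: minimize the harm term alone and you choose exposure = 0.
# Joint accounting: with connection_potential = 1, full exposure scores -1 + 5 = 4,
# which beats the 0 you get by eliminating the exposure.
print(total_value(0.0, 1.0))  # 0.0
print(total_value(1.0, 1.0))  # 4.0
```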
I’m also not convinced that people get confused this way? Maybe there is a way to define “bad” that makes this confusion even coherent, but I can’t think of such a way. The only way I can imagine a person endorsing the claim that the exposure itself is good is as a strong rejection of the premise that the thing that is actually good is separable from the exposure. Because, after all, if exposure under certain conditions (something like: exposure to a person I have good reason to trust, having thought about and addressed the ways it could be solvably bad, in pursuit of something I value more than I fear the risk of potential pain) always corresponds to a good that makes taking on that exposure worth it, then every conceptually possible version of that exposure is worth taking on net. What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good? Maybe you can say that there’s no category difference between the sort of exposure that can be productively eliminated and the sort that can’t, but the fact that I can describe the difference between these categories seems to suggest otherwise. The only way I can see for this to fail is for the description I gave to be incoherent, which only seems possible if one of the categories is empty.
On the other hand, I think many people are miscalibrated on this sort of calculation, such that they take either more or less emotional risk than they ideally would, and I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure. I expect any sort of truly separate accounting to involve optimization on the risk side without consideration of the trust side, and because the effects on the trust side are subtle and harder to remember (in the sense that the sort of trust I care about is really, really good in my experience; it’s the sort of thing that takes basically all of my cognition to fully experience, so when any part of my cognition is focused elsewhere I cannot accurately remember how good it is), such a separate accounting will tend to lose out to an unseparated approach.
(I have no real standing to say this part, so feel free to dismiss it with as much prejudice as is justified, but this:
Ok, I’m going to do this thing, and it has some exposure to harm, and that part is bad, but the exposure has some subtle positive effects too, and also it is truly eternally inseparable from the goods, and it’s worth it overall, so I’m going to do it.
really does not seem like the sort of thought process that could properly calibrate a person’s exposure to emotional risk! My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing.)
EDIT: I think we’ve nailed down our disagreement about the object-level thing and we’re unlikely to come to agree on that; it seems like the remaining discussion is just about which distinctions are useful. Maybe this is the same disagreement and we’re unlikely to come to agree about this either? My preference is to talk about vulnerability by default, with the understanding that vulnerability is a contingent part of certain social goods, but that in some cases the vulnerability can be trimmed without infringing on the social goods, so I would talk about unnecessary or excessive vulnerability in those cases. My understanding of your preference is to talk about vulnerability by default, with the understanding that vulnerability is the (strictly bad) exposure to emotional pain that often accompanies some social interactions. But it’s at least plausible that vulnerability could be a contingent part of certain social goods, so in discussing those sorts of social goods, at least as hypothetical objects, you’d refer to something like necessary vulnerability? And in cases where vulnerability could in theory be trimmed away by a sufficiently refined self-model, but where that level of refinement is not easy to achieve and in practice the right thing to do is to proceed under the theoretically resolvable uncertainty, something like worthwhile vulnerability? And then our disagreements, in your language, would be: I think that necessary vulnerability actually exists in theory, and that the set of necessary or worthwhile vulnerability is big enough that we shouldn’t separate it from primitive vulnerability, and you would take the opposite position on both of those claims. Am I understanding correctly?
What does it even mean to say that that category of exposure is bad if its every conceivable incarnation is net good?
As an analogy, consider paying for stuff with money. (We could think about how it’s actually good that the other person gets money, because that way they can invest it to make more stuff efficiently, which I agree with, but I’d bid to put that aside for the analogy.) From your selfish perspective, is that good or bad or what? Generally, you’d aim, and probably usually succeed, at paying for stuff when it’s net good. But that’s not how you do the accounting: you still want to account for the part where you give up some money as a bad aspect of the total transaction.
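To put toy numbers on the analogy (mine, not from the comment above): the transaction can be net good while the money you hand over is still booked as a cost rather than folded invisibly into the benefit.

```python
# Toy accounting sketch; the figures are invented purely for illustration.
value_of_stuff = 50.0   # what the purchase is worth to you
price_paid = 30.0       # the cost side of the ledger, "bad" taken on its own

net = value_of_stuff - price_paid
print(net)  # 20.0 -> the trade is worth making, but the price line is still a cost
```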
I explained earlier why I’m very worried about ways of thinking that tend toward underexposure and not so worried about ways that tend toward overexposure.
I would want to point out that constructing complicated boundaries is difficult, but it is a worthwhile task that allows you to avoid blunt-force action in favor of more precise action. In this case, I’d be concerned about the fact that there’s this under/over-exposure tradeoff. To me, that says we’re not doing a good job of identifying the cases where the exposure is worthwhile.
My extremely strong suspicion is that a person whose thought process goes like this with any frequency, even if they end up accepting the risk often enough when they think it through, is extremely underexposed to emotional risk and does not know it because unlike overexposure, underexposure is self-reinforcing
Yeah, I think you’re wrong about this regarding many people, including me. I observe in myself and in other people that when we slow down and talk or think about why something is harm-exposing, we often figure out how to be less harm-exposed while still being able to explore things more openly.