Ok, I was probably not going to write the post anyway, but since no one seems to actively want it, your insistence that it requires this much extra care is enough to dissuade me.
I will say, though, that you may be committing a typical mind fallacy when you say “convincing is >>> costly to complying to the request” in your reply to Zack Davis’ comment. I personally dislike doing this kind of lit-review-style research because in my experience it’s a lot of trudging through bullshit with little payoff, especially in fields like social psychology, and especially when the only guidance I get is “ask ChatGPT for related Buddhist texts”. I don’t like using ChatGPT (or LLMs in general; it’s a weakness of mine, I admit). Maybe a few years of capabilities advances will change that.
And it seems that I was committing a typical mind fallacy as well, since I implicitly assumed that when you said “this topic has been covered extensively” you had specific writings in mind, and that all you needed to do was retrieve them and link them. I now realize that this assumption was incorrect, and I’m sorry for making it. It’s clear now that I underestimated the cost you would incur in convincing me to do that research before making a post.
I hope this concept gets discussed more in places like LessWrong someday, because I think there may be a lot of good we can do in preventing this kind of suffering, and the first step to solving a problem is pointing at it. But it seems that now is not the time and/or I am not the person to do that.
Thank you for this very kind comment! I would like to talk in more detail about what was going on for me here, because while your assumptions are kindly framed, they’re not quite accurate, and I think understanding a bit more about how I’m thinking about this might help.
The issue is not that I can’t easily think of things that look relevant/useful to me on this topic; the issue is that the language you’re using to describe the phenomenon is so different from the language used to describe it in the past that I would be staking the credibility of my caution entirely on whether you were equipped to recognize nearby ideas in an unfamiliar form — a form against which you already have some (justified!) bias. That’s why it would be so much work! I can’t know in advance whether the Buddhist or Freudian or IFS or DBT or CBT or MHC framing of this kind of thing would immediately jump out to you as clearly relevant, or would help demonstrate the danger/power in the idea, much less equip you with the tools to talk about it in a manner that was sensitive enough by my lights.
So recommending asking ChatGPT wasn’t just lazily pointing at the lowest-hanging fruit; the Conceptual-Rounding-Error-Generator would be extremely helpful in offering you a fairly quick survey of relevant materials by squinting at your language and offering a heap of nearby and not-so-nearby analogs. You could then pick the thing you thought was most relevant or exciting, read a bit about it, look into cautions related to that idea (or infer them yourself), and then generalize back to your own flavor of this type of thinking.
It’s simply not instructive or useful for me to cram your thought into my frame and then insist you think about it This Specific Way. Instead, my hope was that noticing that all (or most) plausibly related past thoughts (and, in particular, the thoughts you consider nearest to your own) come with risks and disclaimers would naturally inspire you to take the next step and do the careful, sensitive thing in rendering the idea.
This is a hard dynamic to gesture at, and I did try to get it across earlier, but the specific questions I was being asked (and felt obligated to reply to) felt like attempts to take shortcuts that misread the situation as something much simpler (e.g. ‘William could just tell me what to look at, but he’s being lazy and not doing it’ or ‘William actually doesn’t have anything in mind and is just being mean for no reason’).
Hence my choice to behave unreasonably / embarrass myself as a way of rendering a costlier signal. I did try to keep this from being outright discouraging, and hoped that continuing to respond would signal something like ‘I’m invested in this going well, not just bidding to shut you down outright.’
I think you should think more about this idea, and get more comfortable with the shittier parts of connecting your ideas to broader conversations.