I’m very glad that you managed to train yourself to do that but this option is not available for everyone. I see a lot of engaging with the details and giving singular instances of something not occurring, but I don’t see a lot of engaging with the least convenient possible world. As I was writing this reply it became longer and longer, so I decided to rewrite it and make it its own post. You can check out some more inconvenient counterexamples I thought up here. [Edit: I saved the post to draft by accident. I didn’t want to reupload it, but if we ever get a way to have ‘unlisted’ posts I will upload it unlisted. Until that time I have changed the link so you can still see my post and the comments it received]
I’m very glad that you managed to train yourself to do that but this option is not available for everyone.
Do you have any evidence for this statement? That seems like an awfully quick dismissal given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to “maybe I’m missing something here”. I’m not asking you to be more credulous or to simply believe anything I’m saying, mind you, but maybe a bit more skeptical and a little less credulous of your own ideas, at least until that stops happening.
Because you do have that option available to you. In my experience, it’s simply not true that attempts at self deception ever give better results than simply noticing false beliefs and then letting them go once you do, or that anyone ever says “that’s a great idea, let’s do that!” and then mysteriously fails. The idea that it’s “not available” is one more false belief that gets in the way of focusing on the right thing.
Don’t get me wrong, I’m not saying that it’s always trivial. Epistemic rationality is not trivial. It’s completely possible to try to organize one’s mind into coherence and still fail to get the results because you don’t realize where you’re missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up on more well-meaning self deceptions.
Well, if I don’t think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)
[This is condensed and informalized from a much longer and more explicit comment which I’m not sure would have been worth wading through, which still seemed hazy in important ways, and which seemed like it needed me to open more boxes than I have energy for right now. This one still seems hazy, but hopefully it wears it more on its sleeve. I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I’m working around enough of it for this to still be useful.]
I think you’re interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.
I have a completely different tack in mind: how do we know that the sort of mental maneuvers you describe don’t become harmful in their aggregate effects when too many people do them, or do them without coordinating enough, or something along those lines?
I would like to point out the following:
The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break.
“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms? Some detachment from it seems to be part of emotional maturity, but that’s coupled with a lot of other mediating material. Further detachment seems to be part of various spiritual traditions—also coupled with even more mediating material. That’s not very promising for any implied “there’s no such thing as too much”.
More specifically, I would like to consider the possibility that many of the sort of false aliefs you’re talking about act more like restraining bolts imposed by the social cohesion subunit of a human mind, “because” humans are not safe under amplification with regard to social values. (And notably, “hypercompetent individuals are good for society” is by no means a universal more.)
Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.
[… this was going to be more-edited, but I’ve accidentally hit Submit, and I don’t want to do too much frantic editing, so I’ve just cleaned up a few pieces. I think this is still just-about worth enough to leave up; I’ll try to come back to it later if it’s deemed worth talking about.]
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I’m working around enough of it for this to still be useful.]
This is a really cool declaration. It doesn’t bleed through in any obvious way, but thanks for letting me know, and I’ll try to be cautious of what I say and how I say it. Lemme know if I’m bumping into anything or if there’s anything I could be doing differently to better accommodate.
I think you’re interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.
I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?
“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?
Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.
When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should feel foolish for thinking that it might be possible. When you see that the thing stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.
Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.
It seems that this bit is your main concern?
It can be a real concern. More than once I’ve had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It’s not because of stuff like “I can consciously prevent a lot of swelling, and they don’t know how to engage with that” but rather because of stuff like “it’s hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved”. In my experience, it’s a consequence of being able to see the problems in the group before being able to see what to do about it.
I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.
If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives. But neither can you just melt in and let your own perspective dissolve into the social consensus. If you don’t retain enough separation to at least have your own thoughts, think about whether they might be better, and consider how best to merge them with the group’s, then you’re shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whoever wants to command the mob. This doesn’t tend to lead to great things.
I’ve put a few cycles into trying to come up with a better way to point at the thing/model I’m thinking of. (I say “thing/model” because in the domain of social psychology especially, Strange Loops between a phenomenon and people’s models of the phenomenon cause them to not be that cleanly separable. Is there a word for that that I’m missing?) I haven’t gotten through much of it, but in the meantime, I’ve also just noticed that a recent second-level comment by Vaniver on their own “How alienated should you be?” post has a description that seems to come from a similar observation/interpretation of the world to the part of mine I’m trying to point at, and the main post goes into more detail. So that may help. I think there is a streak of variants of this idea in LW already, and it’s possible that what I really want to do is go through the archives and find the best-aligned existing posts on the subject to link to…
I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you’re trying to say about it in particular. I think I’m less concerned though, because I don’t see inter-agent value differences and the resulting conflict as some fundamental, inextricable part of the system.
Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological “defense mechanisms”, because “*obviously* evolution wouldn’t select for those who ‘defend’ from threats by sticking their heads in the sand!”—as if the problem of wireheading were as simple as competition between a “gene for wireheading” and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It’s also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it’s too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. “Defense mechanisms” arise not out of mysterious propagation of fitness-reducing genes, but rather out of the lack of a solution to the hard problem of separating the effective flinches from the ineffective—and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take painkillers.
The solution of “simply noticing that the pain from cauterizing a serious bleed isn’t a *bad* thing and therefore not flinching from it” isn’t trivial. It’s *doable*, and to be aspired to, but there’s no such thing as “a gene for wise decisions” that is already “hard coded in DNA”.
Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn’t proof of some sort of “selection for shittiness” any more than noticing individual incoherence and the resulting dysfunction would be. It’s not that coherence is impossible or undesirable, just that you’re fighting entropy to get there, and succeeding takes work.
The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying “no thanks, I can wait” will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn’t winning by disincentivizing cooperation with him, and it’s likely more of a “his desire to not feel pain is winning, so he bleeds” sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it’s also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there’s an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.
I don’t see negative sum conflict between the individual and society as *inevitable*, just difficult to avoid. It’s negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying “shut up and be a cog”, I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting themselves by wasting individual potential that they could be profiting from, and would love to if only they could see how and implement it. The other is a society functioning more or less as intended and using “shut up and be a cog” as a shit test to filter out the leaders who don’t have what it takes to say “nah, I think I’ll trust myself and win more”, and lead effectively. Just like the burning pain, it’s there for a reason and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It’s not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.
And as a result, I don’t really feel myself being pulled into a conflict between “respect society’s stupid beliefs/rules” and “care about other people”. I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. It makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it’s more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there’s no reason to go from “I can’t yet be effective in the broader community because I can’t yet break out of their ‘cog’ mold for me, so I’m going to focus on the smaller community where I can” to “fuck them all”. There’s still plenty of value in reengaging when capable, and pretending there isn’t is not the good, functional thing we’re striving for. It’s not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there’s plenty of incentive to help things go well for everyone.
You could probably write the same answer without the snark. Your study on placebo only mentions it working on IBS patients, so it’s not the grand dismissal of placebo that you claim it is, but even if it were there are still plenty of similar phenomena. The easiest to adapt would be the nocebo effect: just switch the positives with the negatives in the example and you have your nocebo argument.
There’s no snark in my comment, and I am entirely sincere. I don’t think you’re going to get a good understanding of this subject without becoming more skeptical of the conclusions you’ve already come to and becoming more curious about how things might be different than you think. It simply raises the barrier to communication high enough so as to make reaching agreement not worthwhile. If that’s not a perspective you can entertain and reason about, then I don’t think there’s much point in continuing this conversation.
If you can find another way to convey the same message that would be more acceptable to you, let me know.
I would favor a conversation where we keep attacks on the person to an absolute minimum and focus instead on the arguments being made (addressing the person is sometimes necessary, but entirely ignoring the argument in favor of attempting to psychoanalyze a stranger on the internet is not a good way to have a philosophical discussion). Secondly I would also like to hear a counterargument to the argument I made. And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?
It’s not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there’s no shame in that. Heck, maybe I’m even wrong and what I’m perceiving as an error actually isn’t one. Learning from mistakes (if it turns out to be one) is how we get stronger.
I try to avoid making that mistake, but if you feel like I’m erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.
I’m sorry if it hasn’t been sufficiently clear that I’m friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.
Secondly I would also like to hear an actual counterargument to the argument I made
Which one? The “it was only studying IBS” one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It’s always going to be “in the cases they’ve studied” and it’s always conceivable that if you only knew to find the right use of placebos to test, you’ll find one where it doesn’t work. However, when placebos work without deception in every case you’ve tested, the default hypothesis is no longer “well, they require deception in every case except these two weird cases that I happen to have checked”. The default hypothesis should now be “maybe they just don’t require deception at all, and if they do maybe it’s much more rare than I thought”.
I’m not sure what point the existence of nocebo makes for you, but the same principles apply there too. I’ve gotten a guy to punch a cactus right after he told me “don’t make me punch the cactus” simply by making him expect that if I told him to do it he would. Simply replace “because drugs” with “because of the way your mind works” and you can do all the same things and more.
I’m not sure how many more times I’ll be willing to address things like this though. I’m willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn’t considered and are therefore surprisingly strong, but if you still just don’t buy into the general idea as worth exploring then I can agree to disagree.
And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?
Yeah, it didn’t submit properly the first time and then didn’t seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I’d have deleted one if I could have.
Speaking of deleting things, what happened to your other post?
It’s not an attack, and I would recommend not taking it as one.
“Attack” is just a common shorthand for “verbal arguments against X”, but while it is the common way of phrasing such a thing, I agree that it is stylistically odd. I didn’t assume you had any malice in mind; I was just using it the common way, but will refrain from doing so (in similar contexts) in the future.
Yeah, it didn’t submit properly the first time and then didn’t seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I’d have deleted one if I could have.
Speaking of deleting things, what happened to your other post?
Alright, no problem, things like that happen all the time so I will just delete it. I described what happened to the other post here. This was one of the difficult cases where I had to balance my desire to have a record of the things (and mistakes) people (including me) said against not wanting to clog the website with low-quality (as the downvotes indicated) content (I think I found a good solution). I’m having the same dilemma right now where my genuine comments are getting voted into the negative and I’m starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people’s time with content they think is low quality (yes yes, I know that that doesn’t mean it is low quality per se, but it is a close enough heuristic that I’m mostly willing to stick to it). But the downvotes are very clear, so while I’m disappointed that we couldn’t talk through this issue, I will no longer be eating up people’s time.
I’m having the same dilemma right now where my genuine comments are getting voted into the negative and I’m starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people’s time with content they think is low quality (yes yes, I know that that doesn’t mean it is low quality per se, but it is a close enough heuristic that I’m mostly willing to stick to it). But the downvotes are very clear, so while I’m disappointed that we couldn’t talk through this issue, I will no longer be eating up people’s time.
The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?
While I generally support the idea that it’s better to stop posting than to continue to post things which will predictably be negative karma sum, I don’t think that’s necessary here. There’s plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one’s own curiosity can be good not just for the individual in question, but any other lurkers who might have the same curiosities and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.
I think the downvotes are about something else, which is a lot more easily fixable. While I’m sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put into understanding where the other person is coming from and how theirs can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something in a way that is understood. At some point, if you don’t put enough effort in, you start to miss valid points which would have been easy for you to find and which would be prohibitively difficult to word in a way that you wouldn’t miss.
As an example, you responded to Richard_Kenneway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I’m not sure whether you simply missed that part or whether you don’t believe him, but either way it is very hard to have a conversation with someone who doesn’t engage with points like this at least enough to say why they aren’t convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That’s not to say that you have to believe that they’re being reasonable/charitable/etc, or that you have to act like you do, but it’s nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of “insufficiently charitable” is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.
It’s a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn’t sweat it.
The examples that people have given are real ones. Yours are fictional. It’s easy to make up stories of how the world would look, conditional upon any proposition whatever being true. (V cerqvpg gung ng yrnfg bar ernqre jvyy vafgnagyl erfcbaq gb guvf pynvz ol znxvat hc n fgbel va juvpu vg vf snyfr.) In this light, the “least convenient possible world” for one’s interlocutors is the most convenient possible for oneself, the one in which the point at issue is imagined to be true.
We assume it’s true; we don’t have any evidence. I could tell stories about my personal experience, but you’d have no way to check them. At least saying upfront that it’s a thought experiment keeps the debating ground neutral and allows people’s reasoning to do the work instead of their emotions. And no, I would never make up a story to defend my argument; the fact that you would assume your interlocutor is lying without any evidence to back that up is really hampering my desire to debate you.
You made up six stories here. I was not imputing any dishonesty, only pointing out that they are fiction.
OTOH, you just said of the other stories presented here that “we don’t have any evidence”. The stories I was referring to are jimmy’s story of preventing swelling in an injured joint, and his account of Conor McGregor. These stories purport to be of real things that happened. To say that his account is no evidence of that looks very like what you took me to be doing.
I was referring to that block of text that you have encoded; I decoded it and there you state the assumption that your interlocutor will lie. And no, I am assuming they are true, which is why I said “we assume it’s true”. I would also keep anecdotal evidence to a minimum in this type of discussion because I would want my interlocutor to be able to check every step of my reasoning. And anecdotal evidence for a positive occurrence of a phenomenon does not discount the existence of a negative occurrence. I say there exists such a thing as X, and the counterargument is “but this one time there was Y”. Do you have any arguments as to why my counterarguments, or something in a similar vein, couldn’t happen?
[EDIT] Richard says he meant the encoded text to only mean that the reader thinks up, but doesn’t present the false story. This is a plausible interpretation of the text and since I can’t know which one was meant I will assume it was the more charitable one and retract these comments.
I’ll take a look at your new post.
This is a really cool declaration. It doesn’t bleed through in any obvious way, but thanks for letting me know, and I’ll try to be cautious of what I say and how I say it. Lemme know if I’m bumping into anything or if there’s anything I could be doing differently to better accommodate.
I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?
Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.
When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.
It seems that this bit is your main concern?
It can be a real concern. More than once I’ve had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It’s not because of stuff like “I can consciously prevent a lot of swelling, and they don’t know how to engage with that” but rather because of stuff like “it’s hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved”. In my experience, it’s a consequence of being able to see the problems in the group before being able to see what to do about them.
I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.
If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives, but neither can you just melt in and let your own perspective become that social consensus. If you don’t retain enough separation that you can at least have your own thoughts and think about whether they might be better and how best to merge them with the group, then you’re just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whomever wants to command the mob. This doesn’t tend to lead to great things.
Does that address what you’re saying?
I’ve put a few cycles into trying to come up with a better way to point at the thing/model I’m thinking of. (I say “thing/model” because in the domain of social psychology especially, Strange Loops between a phenomenon and people’s models of the phenomenon cause them to not be that cleanly separable. Is there a word for that that I’m missing?) I haven’t gotten through much of it, but in the meantime, I’ve also just noticed that a recent second-level comment by Vaniver on their own “How alienated should you be?” post has a description that seems to come from a similar observation/interpretation of the world to the part of mine I’m trying to point at, and the main post goes into more detail. So that may help. I think there is a streak of variants of this idea in LW already, and it’s possible that what I really want to do is go through the archives and find the best-aligned existing posts on the subject to link to…
I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you’re trying to say about it in particular. I think I’m less concerned though, because I don’t see inter-agent value differences and the resulting conflict as some fundamental inextricable part of the system.
Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological “defense mechanisms”, because “*obviously* evolution wouldn’t select for those who ‘defend’ from threats by sticking their heads in the sand!”—as if the problem of wireheading were as simple as competition between a “gene for wireheading” and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It’s also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it’s too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. “Defense mechanisms” arise not out of mysterious propagation of fitness-reducing genes, but rather the lack of a solution to the hard problem of separating the effective flinches from the ineffective—and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.
The solution of “simply noticing that the pain from cauterizing a serious bleed isn’t a *bad* thing and therefore not flinching from it” isn’t trivial. It’s *doable*, and to be aspired to, but there’s no such thing as “a gene for wise decisions” that is already “hard coded in DNA”.
Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn’t proof of some sort of “selection for shittiness” any more than individual incoherence and the resulting dysfunction are. It’s not that coherence is impossible or undesirable, just that you’re fighting entropy to get there, and succeeding takes work.
The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying “no thanks, I can wait” will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn’t winning by disincentivizing cooperation with him, and it’s likely more of a “his desire to not feel pain is winning, so he bleeds” sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it’s also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there’s an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.
I don’t see negative-sum conflict between the individual and society as *inevitable*, just difficult to avoid. It’s negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying “shut up and be a cog”, I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting itself by wasting individual potential that it could be profiting from, and would love to if only it could see how and implement it. The other is a society functioning more or less as intended and using “shut up and be a cog” as a shit test to filter out the leaders who don’t have what it takes to say “nah, I think I’ll trust myself and win more”, and lead effectively. Just like the burning pain, it’s there for a reason, and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It’s not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.
And as a result, I don’t really feel myself being pulled between a conflict of “respect society’s stupid beliefs/rules” and “care about other people”. I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it’s more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there’s no reason to go from “I can’t yet be effective in the broader community because I can’t yet break out of their ‘cog’ mold for me, so I’m going to focus on the smaller community where I can” to “fuck them all”. There’s still plenty of value in reengaging when capable, and pretending there isn’t is not the good, functional thing we’re striving to do. It’s not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there’s plenty of incentive to help things go well for everyone.
You could probably write the same answer without the snark. Your study on placebo only mentions it working on IBS patients, so it’s not the grand dismissal of placebo that you claim it is, but even if it were, there are still plenty of similar phenomena. The easiest to adapt would be the nocebo effect: just switch the positives with the negatives in the example and you have your nocebo argument.
There’s no snark in my comment, and I am entirely sincere. I don’t think you’re going to get a good understanding of this subject without becoming more skeptical of the conclusions you’ve already come to and becoming more curious about how things might be different than you think. Without that shift, the barrier to communication is simply raised high enough to make reaching agreement not worthwhile. If that’s not a perspective you can entertain and reason about, then I don’t think there’s much point in continuing this conversation.
If you can find another way to convey the same message that would be more acceptable to you, let me know.
I would favor a conversation where we keep attacks on the person to an absolute minimum and focus instead on the arguments being made (addressing the person is sometimes necessary, but entirely ignoring the argument in favor of attempting to psychoanalyze a stranger on the internet is not a good way to have a philosophical discussion). Second, I would like to hear a counterargument to the argument I made. And third, I have never deleted a comment, but you appear to have double posted; shall I delete one of them?
It’s not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there’s no shame in that. Heck, maybe I’m even wrong and what I’m perceiving as an error actually isn’t one. Learning from mistakes (if it turns out to be one) is how we get stronger.
I try to avoid making that mistake, but if you feel like I’m erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.
I’m sorry if it hasn’t been sufficiently clear that I’m friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.
Which one? The “it was only studying IBS” one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It’s always going to be “in the cases they’ve studied” and it’s always conceivable that if you only knew to find the right use of placebos to test, you’ll find one where it doesn’t work. However, when placebos work without deception in every case you’ve tested, the default hypothesis is no longer “well, they require deception in every case except these two weird cases that I happen to have checked”. The default hypothesis should now be “maybe they just don’t require deception at all, and if they do maybe it’s much more rare than I thought”.
I’m not sure what point the existence of nocebo makes for you, but the same principles apply there too. I’ve gotten a guy to punch a cactus right after he told me “don’t make me punch the cactus” simply by making him expect that if I told him to do it he would. Simply replace “because drugs” with “because of the way your mind works” and you can do all the same things and more.
I’m not sure how many more times I’ll be willing to address things like this though. I’m willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn’t considered and are therefore surprisingly strong, but if you still just don’t buy into the general idea as worth exploring then I can agree to disagree.
Yeah, it didn’t submit properly the first time and then didn’t seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I’d have deleted one if I could have.
Speaking of deleting things, what happened to your other post?
“Attack” is just the common shorthand for “verbal arguments against X”, but while it is the common way of phrasing such a thing, I agree that it is stylistically odd. I didn’t assume you had any malice in mind; I was just using it the common way, but will refrain from doing so (in similar contexts) in the future.
Alright, no problem, things like that happen all the time, so I will just delete it. I described what happened to the other post here. This was one of the difficult cases where I had to balance my desire to have a record of the things (and mistakes) people (including me) said against not wanting to clog the website with low-quality (as the downvotes indicated) content (I think I found a good solution). I’m having the same dilemma right now where my genuine comments are getting voted into the negative and I’m starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people’s time with content they think is low quality (yes yes, I know that that doesn’t mean it is low quality per se, but it is a close enough heuristic that I’m mostly willing to stick to it). But the downvotes are very clear, so while I’m disappointed that we couldn’t talk through this issue, I will no longer be eating up people’s time.
Thanks, I hadn’t seen the edit.
The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?
While I generally support the idea that it’s better to stop posting than to continue to post things which will predictably net out to negative karma, I don’t think that’s necessary here. There’s plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one’s own curiosity can be good not just for the individual in question, but also for any lurkers who might have the same curiosities, and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.
I think the downvotes are about something else which is a lot more easily fixable. While I’m sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put into understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don’t put in enough effort, you start to miss valid points which would have been easy for you to find and would be prohibitively difficult to word in a way that you wouldn’t miss.
As an example, you responded to Richard_Kenneway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I’m not sure whether you simply missed that part or whether you don’t believe him, but either way it is very hard to have a conversation with someone who doesn’t engage with points like this at least enough to say why they aren’t convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That’s not to say that you have to believe that they’re being reasonable/charitable/etc, or that you have to act like you do, but it’s nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of “insufficiently charitable” is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.
It’s a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn’t sweat it.
The examples that people have given are real ones. Yours are fictional. It’s easy to make up stories of how the world would look, conditional upon any proposition whatever being true. (V cerqvpg gung ng yrnfg bar ernqre jvyy vafgnagyl erfcbaq gb guvf pynvz ol znxvat hc n fgbel va juvpu vg vf snyfr.) In this light, the “least convenient possible world” for one’s interlocutors is the most convenient possible for oneself, the one in which the point at issue is imagined to be true.
We assume it’s true; we don’t have any evidence. I could tell stories about my personal experience, but you’d have no way to check them. At least saying upfront that it’s a thought experiment keeps the debate ground neutral and allows people’s reasoning to do the work instead of their emotions. And no, I would never make up a story to defend my argument; the fact that you would assume your interlocutor is being a liar without any evidence to back that up is really hampering my desire to debate you.
You made up six stories here. I was not imputing any dishonesty, only pointing out that they are fiction.
OTOH, you just said of the other stories presented here that “we don’t have any evidence”. The stories I was referring to are jimmy’s story of preventing swelling in an injured joint, and his account of Conor McGregor. These stories purport to be of real things that happened. To say that his account is no evidence of that looks very like what you took me to be doing.
I was referring to that block of text that you have encoded; I decoded it, and there you state the assumption that your interlocutor will lie. And no, I am assuming they are true, which is why I said “we assume it’s true”. I would also keep anecdotal evidence to a minimum in this type of discussion because I would want my interlocutor to be able to check every step of my reasoning. And anecdotal evidence for a positive occurrence of a phenomenon does not discount the existence of a negative occurrence. I say there exists such a thing as X, and the counterargument is “but this one time there was Y”. Do you have any arguments as to why my counterarguments, or something in a similar vein, couldn’t happen?
[EDIT] Richard says he meant the encoded text to describe a reader who merely thinks up, but doesn’t present, the false story. This is a plausible interpretation of the text, and since I can’t know which one was meant, I will assume it was the more charitable one and retract these comments.
As before, I was not imputing any dishonesty to the hypothetical reader reflexively thinking up a hypothetical counterexample to a generalisation.