Regarding the third point, my interpretation of this part was very different: “I don’t have this for any other human flaw—people with terrible communication skills, traumatized people who lash out, anxious, needy people who will try to soak the life out of you, furious dox-prone people on the internet—I believe there’s an empathic route forward. Not so with frame control.”
I read it as: “I’m not very vulnerable to those types of wrongness, which all have the same absolute value in some linear space, but I am vulnerable to frame control, and I believe the nuclear option is justified and people should feel OK while using it.”
I, personally, am not especially vulnerable to frame control. My reaction to the examples is of the form “there is a lot to unpack here, but let’s just burn the whole suitcase”. They struck me as manipulative, and done with Badwill. As such, they set off an alarm in my mind, and in such cases, this alarm neutralizes 90% of the harm.
My theory regarding things like that, the whole cluster of hard-to-pinpoint manipulations, is that understanding them is power. I read a lot, and now I tend to recognize such things. As such, I’m not especially vulnerable to them, and don’t have the burn-it-with-fire reaction. I have more of an “agh, this person, it’s impossible to talk to them” reaction. I find dox-prone, needy, lash-out people much more problematic to deal with.
I have zero personal knowledge of the writer, but the feeling I get from the post is that she would agree with me. She would tell me that if I can be around a frame controller and not be harmed, that’s OK, and if I can’t be around a needy person, that’s OK too. I will avoid the needy one, and she the frame controller. I’m less sure she would agree with me that different people can tolerate different vectors of badness differently, and that allowing one kind forces everyone vulnerable to it to either be harmed or avoid the place.
But the general feeling I got is not “the writer is good at spotting frame control and we should burn it with fire” and more “you should listen to the part of you that is telling you that SOMETHING IS WRONG, and it’s legitimate to take it seriously and act on it”. And it promotes a culture that acknowledges that as legitimate and allows such a person to avoid other people, without guilt-tripping them, or surprising them with the frame controller’s presence, or the other unfriendly things people sometimes do.
As in, I didn’t see burn-frame-controllers-with-fire promoted as a community strategy, but as a personal strategy: a personal strategy that may currently meet active resistance from the community, and should not.
Jasnah Kholin
I think we have very different models of things, so I will try to clarify mine. My best babble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
There are far more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, the other asks questions, or shares a different model, or shares an experience, the first reacts, and so on, for far more than three comments. More like 30 comments. It’s a dialog. And there are a lot of unproductive examples of that on LW, and it’s quite possible (as in, I assign it a probability of 0.9) that in first-order effects the rule would cut out unproductive discussions and be positive.
But I find rules that prevent the best things from happening bad in some way I can’t explain clearly. Something like: I’m here to try to go higher. If that’s impossible, then why bother?
I also think it’s a VERY restrictive rule. I wrote more than three comments here, and you are the first one to answer me. Like, right now I’m taking part in a counter-example to “would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.”
I shared my opinions on very different and unrelated parts of this conversation. This is my sixth comment, and I feel I reacted very low-heat. The idea that I should avoid or ration those comments down to three makes me want to avoid commenting on LW altogether. The message I get from this rule is like… like I’m assumed guilty of a thing I literally never do, and so have very restrictive rules placed on me, and it’s very unfriendly in a way I find hard to describe.
Like, 90% of the activity this rule would restrict is legitimate, good comments. That is an awful false-positive ratio, even before counting the you-are-bad-and-unwelcome effect, which I feel from it and you, apparently, do not.
I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. Good comments should, most of the time, not be criticism, but part of the building.
The dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post.
Counting all comments as prune instead of babble disincentivizes babble-comments. Is this what you want?
(3) I didn’t watch the movie, nor do I plan to, but I read the plot summary on Wikipedia, and I see it as a caution against escalation. The people there consistently believe that you should avenge a 1-point offense with a 4-point punishment, and this creates an escalation cycle.
While I think most of Duncan’s writing is good, the place where I think he consistently creates bad situations is in disproportionate escalations of conflict, and an inability to just let things be.
Once upon a time, if I saw someone do a 1-point-bad thing and someone else react with a 3-point-bad thing, I would think the first one is 90% of the problem. With time, I find robustness more and more important, and now I see the second one as more problematic. As such, I disagree with your description of the movie.
The plot is one person doing something bad, another refusing to punish him, and a lot of people escalating things and thereby, by my standards, doing bad things. A LOT of bad things. To call it a chain reaction is to deny the people doing those disproportionately escalating things agency over their bad choices. That’s strange to me, as I see this agency very clearly.
So this is the fourth time I am trying to write this comment. It is far from ideal, but I feel I did the best that my current skill in writing in English and understanding such situations allows.
1. I find 90% of the practical problems to be Drama, as in long, repetitive, useless arguments. If this were Facebook and Duncan blocked Said, and then proceeded to block anyone too far out of line by Duncan-standards, it would have solved 90% of the Duncan-related problems. If he had already given up on making LW his kind of garden, that would have solved another 9%.
2. In my ideal Garden, Said would have been banned long ago. But it is my belief (and I have something like five posts waiting to be written to explain my framework and evidence on that, if I ever actually write them) that LW will never be anything even close to my or Duncan’s Garden (there is 80%-90% similarity between our definitions of a garden, by my model of Duncan).
In this LessWrong, he may remain and not be blocked. It would also be good if more people ignored his comments that predictably start a useless argument. Aka: if I write something about introspection, I expect Said’s comment to be useless. I also expect most third-and-later comments in a thread to be useless.
In a better LW, those net-negative comments would be ignored, downvoted, and maybe deleted by mods, while the good ones would be upvoted and get reactions.
3. Duncan, I will be sad if you leave LW. I really enjoy and learn from your posts. I also believe LW will never be your Garden. I would like you to just give up already on changing LW, but still remain here and write. I wish you could just… care less about comments, and assume that 90% of what is important on LW is posts, not comments. Ignore most comments; answer only those you deem good and written in Goodwill. LessWrong is not YOUR version of the Garden, and never will be. But it has good sides, and you (hopefully) can choose to enjoy the good parts and ignore the bad ones. Right now it looks to me like you are optimizing toward finding things you object to and engaging with them, in the hope of changing LW to be more to your standards.
Interesting! Now I’m thinking about what my own version of this post would look like, and what those differences tell me about myself. I think that if different people wrote their own versions (I count Duncan’s rules of discussion as his own version, despite the different format), it would give interesting information about how people differ, and how to pass their ITT. I may try to write my own version of such a post as an exercise in “know thyself”.
This is an extremely good post. It exemplifies and illustrates the sort of mental moves I believe are needed for rational thinking, of the “know thyself” variety. Those things are even harder than usual to communicate, and I find this post manages to do that, manages to give me useful information, and gives me an example of how such introspection can happen. I’m really impressed!
Interesting! Reading this post made me realize I hold somewhat the opposite opinion. The people I respect are often the people who are good at untangling big scary questions, so they will not be like that. It’s very much Bucket Errors: “if I think about X, I will have to do uncomfortable thing Y.” So the mental move that helped me was to untangle.
For example, when I thought about the possibility of breaking up, I was practically panicking. It was a very irrational emotion, disconnected from the territory: the breakup itself was swift and easy, and I’m pretty sure I should have done it sooner, though I still have no idea when.
But the mental move that let me think about it was to tell myself that I DON’T HAVE TO BREAK UP. Now, it wasn’t exactly like that: I told myself we could stay together for a year. And then it was extremely clear I wanted to plan for the breakup. And then, over something like one week, breaking up became the only possible option.
In the same way, I didn’t break up by having the uncomfortable conversation. I just… didn’t. It’s harder to describe, but there are people with whom I can have emotionally vulnerable and deep conversations, and people with whom I can’t. And the right move is not to have those conversations with the people it’s hard to have them with, but to build connections with those with whom it’s not hard.
For this move to work, it has to be honest. For example, I’m staying at my job despite the real possibility that I could earn more elsewhere, because it’s comfortable and changing jobs is emotionally very costly for me. I did tell myself a year ago that if they didn’t give me the promotion they promised, I would leave (and I believe this is why I actually got it), but I’m still here. And I’m not sure your framing would see that as the right choice, despite the fact that I did stare into the abyss and precommitted to searching for a different job if I didn’t get the promotion.
There are two things here: acknowledging something, and changing it. And you sort of conflate them. For example, there are ultra-Orthodox people here (Haredim) with somewhat cult-like lives. And there were forums (and I assume there are Facebook groups) for Haredim-against-their-will: people who stared into the abyss, decided religion is a lie, and then decided it’s not worth losing all their family, friends, and workplace, and it’s better to pretend.
There is seeing something, and there is acting on it, and those are two different things. Your framing leans too far toward forcing yourself as the only option, when I see forcing yourself to do things as a form of self-harm (as in Forcing Yourself to Keep Your Identity Small Is Self-Harm), and I prefer ways that do not involve forcing yourself, ways I don’t see in your map (but do see in the territory).
Also, I notice now that I wrote a lot about where I disagree, and that’s misleading. I VERY MUCH agree that doing the hard thing is a very important life skill. I just prefer to un-abyss the abyss before you stare at it.
One of the things I searched for in EA and didn’t find, but think should exist: an algorithm, or algorithms, for deciding how much to donate, as a personal-negotiation thing.
There is Scott Alexander’s post about 10% as a Schelling point and a way to placate anxiety, and there is the Giving What We Can calculation, but neither has anything to do with personal values.
I want an algorithm that is about introspection: about not smashing your altruistic and utilitarian parts, but not your other parts either; about finding what number is the right number for me, by my own Utility Function.
And I just… didn’t find those discussions.
In dath ilan, where people expect to be able to name a price for more or less everything, and did extensive training to have the same answer to the questions ‘how much would you pay to get this extra’ and ‘how much additional payment would you forgo to get this extra’ and ‘how much would you pay to avoid losing this’ and ‘how much additional payment would you demand if you were losing this’, there are answers.
What is the EA analog? How much am I willing to pay if my parents never learn about it? If I could press a button and pay 1% more in taxes that would go to top GiveWell charities, without all the second-order effects except the money, what number would I choose? What if negative numbers were allowed? What about the creation of a city with rules of its own, that takes taxes for EA causes: how much would I accept then?
Where are the “how to figure out how much money you want to donate in a Lawful way?” exercises?
Or maybe it’s because far too many people prefer to have their thinking, logical part win the internal battle against the other, more egotistical ones?
Where are all the posts about “how to find out what you really care about in a Lawful way”? The closest I came are Internal Double Crux and the Multi-agent Model of the soul and all its versions. But where are my numbers?
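The kind of exercise I have in mind could at least be prototyped. Below is a minimal sketch (entirely my own invention, not an existing EA tool; the function name, the tolerance, and the example numbers are all made up for illustration) of the dath-ilan-style consistency check: elicit the same value under the four framings above, then reconcile them into one number, flagging the case where the framings disagree too much to just average away.

```python
# Sketch of a four-framing value-elicitation exercise (hypothetical tool).
from statistics import fmean

# The four dath-ilan framings of "what is this worth to me?"
FRAMINGS = [
    "pay to get this extra",
    "forgo additional payment to get this extra",
    "pay to avoid losing this",
    "demand as additional payment if losing this",
]

def reconcile(answers, tolerance=0.25):
    """Given one answer (in currency units) per framing, return a single
    reconciled value plus a flag that is True when the framings disagree
    by more than `tolerance` (as a fraction of the mean) -- a signal that
    more introspection is needed, not more averaging."""
    mean = fmean(answers)
    spread = (max(answers) - min(answers)) / mean if mean else 0.0
    return mean, spread > tolerance

# Hypothetical answers to "what is a year of donating 1% of income worth to me?"
answers = [900.0, 1000.0, 1100.0, 2000.0]
value, inconsistent = reconcile(answers)
# Here the fourth (loss-demand) framing is an outlier, so `inconsistent` is
# True: the interesting output is the flag, not the averaged number.
```

The point of the sketch is the shape of the exercise, not the arithmetic: the disagreement between framings is itself the introspective data.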
Somewhere (I can’t find it now) someone else wrote that if he did that, Said could always say it’s not exactly what he means.
In this case, I find the comment itself not very insulting. The insult is in the general absence of Goodwill between Said and Duncan, and in the refusal to do interpretive labor. So any comment of the form “my model of you was <model> and now I’m just confused” could have worked.
My model of Duncan avoided posting it here because of the general problems on LW, but I wouldn’t have been surprised if it was a specific problem. I have no idea what Said’s model of Duncan was. But I will try, with the caveat that the Said’s-model-of-Duncan suggested here is almost certainly not true:
I thought that you avoided putting it on LW because there would be strong and wrong pushback here against the concept of imaginary injury. That seemed coherent with the crux of the post. Now that I learn the truth, I’m simply confused: in my model, what you want to avoid is exactly the imaginary injury described in the post, and I can’t form a coherent model of you.
I suspect Said would say I don’t pass his Ideological Turing Test on that, or continue to say it’s not exact. I submit that if I cannot, the hard part is not writing non-insultingly, but passing his Ideological Turing Test.
I actually DO believe you can’t write this in a non-insulting way. I find it the result of not prioritizing developing and practicing those skills in general.
While I do judge you for this, I judge you for it once, on the meta-level, instead of judging each instance separately, as I find this behavior orderly and predictable.
“This puts a new spin on the increasing tendency of employees to change employers and even careers. Rather than a sign of disloyalty or fickleness, it’s just the natural result of an economy efficiently incentivizing and engaging in valuable information exchange”
This is a very interesting idea! Sadly, I have no idea how to test it.
“And how much is it actually mind-killing in the first place?”
A lot. As in: dumber than a 7-year-old kid.
I remember the time I said to a smart woman with a PhD that good intentions lead to hell, and then she said that I had said I’m in hell because of her. This was a ridiculous failure of reading comprehension. After that I started to notice such instances.
My country is in the middle of a major political battle now, and I wrote to a woman on Facebook whom I had talked to in the past, and she sounded less human than chatGPT. Someone else, whom I know in real life, behaved very stupidly and uncharitably because I didn’t support some argument-soldier, despite the fact that I actually agreed with his position.
The mind-killing effect is STRONG.
I didn’t intend to comment, but then I read a comment about fighting negativity bias and decided the commenter was right, so I’m doing it too: this new feature is really good. I encountered it in the wild and found it intuitive (except for which side of the vote is which, but when I did it wrong the colors made it clear and I fixed it immediately), and it’s basically a very good and useful feature. In my model of the world, 70%+ of users like this feature and don’t say so, and the result is the comment section below.
I also find it much better than Duncan’s suggestion below, for reasons related to Propagating Beliefs Into Aesthetics: LessWrong aesthetics are very clearly against attention-grabbing things that are Out To Get You, and against signaling undue overconfidence, as Overconfidence Is Deceit, and Duncan’s suggestion undermines this.
It’s very interesting to read that, because I had exactly the opposite reaction:
What if I got irrefutable proof that [my belief X] contradicts the evidence? I would NOT lose all my friends who believe X. What’s wrong with them, that their friendships depend on believing X?
My beliefs are idiosyncratic enough that I have never met a person I don’t disagree with on something substantial. And yet, I have friends. Maybe it’s because I didn’t invest a lot of effort in creating groups around beliefs?
Now I wonder how much I typical-mind other people on that question, because I expect most people would not lose all their friends over that. Especially not “real” friends.
I feel there is some way I’m still failing the ITT here, but I can’t grasp exactly where.
This is a very interesting comment, about a book I just added to my reading list. Would you consider posting it as a separate post? I have some thoughts about masking and Authenticity, the price of it and the price of too much of it, and I believe it’s a discussion worth having, but not here.
(I believe some people will indeed benefit a lot from not working as new parents, but for others it will be a very big hit to their self-worth, as they define themselves by work, and it is better done only after some introspection and building a foundation of self-value disconnected from work.)
I find it interesting, and something I especially want one of my friends to read. I also liked the ACTUAL EXAMPLES a lot; those were helpful. I will not use (at least, I’m not planning to use) the picture-window-framework metaphor myself.
So… maybe in the future don’t write long posts that take a lot of time just because two people pressured you to? You have an n=1 that it will not be worth it.
“I do generally wish Duncan did more of this and less trying to set-the-record straight in ways that escalate in IMO very costly ways”
Strongly agree.
There is one main problem with this argument, and it is that people who want to cross the Fence aren’t safe in their current position.
For example, high-commitment communities are a “safe” social default, a very old one that survives from before we were human. But, as Ozy wrote, “One of the most depressing facts about high-commitment communities is that they almost all cover up child sexual abuse.”
This is the safety of the Fence. This “safety” sucks.
The sister who went no-contact with her rapist father is the black sheep of the family. She is the radical, the revolutionary. All her family think she is a bad daughter and that she should not deny her father his granddaughter. Her sister, who sends her little boy unsupervised to his grandfather, even after he started wetting himself again: she is the conservative, the one who respects the status quo.
I want to be the black-sheep sister. I can’t see the other option as anything but an abomination.
***
A different argument: what is the Fence? Because if you ask me, cheating in an unhappy marriage IS the Fence, the conservative view. The unconservative view is that you can just divorce. That is very new; it was definitely not like that during most of history, while constant cheating, sometimes with “a self-respecting woman has a husband and a lover” as a folk-wisdom idiom, was the norm in some times and places.
So how can you be respectful of the Fence, when you don’t know which side is the conservative one?
(It’s like what Duncan said, but from a different angle.)
So: I have read Rationalist spaces for almost a decade, and almost never commented. When I did comment, it was in places I consider the Second Foundation. Your effort to make Less Wrong better is basically the only reason I even tried to comment here, because I had basically accepted that Less Wrong comments are too adversarial for safe and worthwhile discussion.
In my experience (and the Internet provides a lot of places with different discussion norms), collaboration is the main predictor of a useful and insightful discussion. I really like those Rationalist spaces where there is real collaboration on truth-seeking. I find a lot of interesting ideas in blogs where comments are adversarial and combative rather than collaborative, and I sometimes found interesting comments there, but I almost never found interesting discussion. I did, however, find a lot of potentially-insightful discussions where the absence of good will and trust and collaboration and charity ruined a perfectly good discussion. Sometimes it was people deliberately pretending not to understand what others said, and attacking a strawman instead. Sometimes (especially around politics) people genuinely failed to understand what others said and were unable to hear anything but the strawman version of an argument. A lot of the time, people were too busy trying to win an argument to listen to what the other side was actually trying to convey: trying to find the weak part of the argument to attack, instead of trying to understand the vague concept in thingspace the person was trying to gesture at.
The winning-an-argument mode almost never produces new insights, while sharing experiences and exploring together, without trying to prove anything, is the fertile ground of discussion.
All the rules in this list are rules I agree with. More than half will facilitate this type of environment, and other things you have written that I have read make me believe you find this kind of collaborative spirit important. But this is my way of seeing the world, in which this concept of Good Will is really important, and more than half of these rules look like ways to implement that concept in practice. I’m not sure this is the way you think about these things, or whether we see the same elements of the territory and map them differently.
If I were writing those rules, I would have started with “don’t be irrationally, needlessly adversarial to wrongly fulfill your emotional needs; for example: [rules 2, 3, 5, 6, 7, 8, 9, 10]”.
But there is enough difference that I suspect there is another concept, near my Good Will concept but different from it, around which those rules cluster, and that I don’t entirely grasp.
Can you help me understand whether such a concept exists, and if so, point me to some posts that might help me understand it?