This post was almost useless for me—I learned from it much less than from any post in the Sequences. What I did learn: what over-generalization looks like; that someone thinks other people learn rationality skills in a way I have never seen anyone learn, with a totally different language and way of thinking about it; and that translating is important.
The way I see it: people look at the world through different lenses. My rationality skills are the lenses that are instinctive to me and fall within the rationality-skills subset.
I learned them mostly by seeing examples and creating a category for them.
Not only did all those exercises not work for me, I now have even less idea what Yudkowsky was trying to teach, while from the Sequences I did manage to learn some things.
Maybe the core rationality skill is the ability to bridge the gap between theory and practice? I consider "go one meta level higher" the most important one; it creates an important feedback loop.
Also, in most situations I consider going a level higher—giving the category and not the example—a good idea.
I actually learned that examples are a really good thing, and that they are the natural way humans learn. I think this is part of what the post was trying to say, but I'm not sure. This is one of the least understandable posts of Yudkowsky's that I have ever read.
Is it? I find it a very Christian way of thinking, and this thought pattern seems obviously wrong to me. It's incorporated into Western culture, but I live in a non-Christian place. You can believe in Heaven for all! Some New Age people believe in that. You can believe in Heaven for all except the especially blameworthy—this is how I understand Mormonism.
Thanks for the insight! Now I can recognize one pretty toxic thought pattern as Christian influence, and understand it better!
It was strange to read. It was interesting—explaining a point I already knew in a succinct and effective way. And it connects nicely with the extensive discussion on consent and boundaries. Boundaries: Your Yes Means Nothing if You Can't Say No.
And then, while I was reading the comments and still internalizing the post, I got it—I actually re-invented this concept myself! It would have been so nice not to have had to… I wrote my own post about it, in Hebrew. Its name translates to Admit that sometimes the answer is "yes", and it starts with a story about a woman who claimed to believe in personal optimization of diet via self-experimentation, but then found a reason to invalidate every result that contradicted her own beliefs about the optimal diet. It took me years to notice the pattern.
And then there is this comment about budgeting and negotiating with yourself, which emphasized how important it is to allow the answer to be "yes":
"I'm seeing a lot of people recommend stopping before making small or impulse purchases and asking yourself if you really, really want the thing. That's not bad advice, but it only works if the answer is allowed to be 'yes.' If you start by assuming that you can't possibly want the thing in your heart of hearts, or that there's something wrong with you if you do, it's just another kind of self-shaming."
It's kind of like 5, but from the point of view of a different paradigm.
And of course: If we can't lie to others, we will lie to ourselves.
It's all related to the same concept, but I find the different angles useful.
I find it interesting, and it's something I especially want one of my friends to read. I also liked the ACTUAL EXAMPLES a lot; that was helpful. I will not use (at least, I'm not planning to use) the picture-window-framework metaphor myself.
So… maybe in the future, don't write long posts that take a lot of time just because two people pressure you to? You have an n=1 that it will not be worth it.
I know very little about other sorts of charity work, but I have heard social workers complain about burnout a lot.
I tend to assume that encountering harsh reality is hard, and that doing unappreciated work that lacks resources is hard.
It may be interesting to see what the baseline burnout level in various fields is, to look both at the variation and at how similar or dissimilar EA is to other charities. It may help us understand how big a part different elements play in burnout—true values alignment, Heroic Responsibility, encountering discouraging reality, other things (like simply too many working hours).
The way I see it, "something is wrong with the people EA attracts" and "there are some problems with EA" are complementary hypotheses: dysfunctional workplaces tend to filter for people who accept those dysfunctions.
This is a very interesting comment, about a book that I just added to my reading list. Would you consider posting this as a separate post? I have some thoughts about masking and Authenticity, and the price of it and the price of too much of it, and I believe it's a discussion worth having, but not here.
(I believe some people will indeed benefit a lot from not working as new parents, but for others it would be a very big hit to their self-worth, as they define themselves by their work, and it is better done only after some introspection and after building a foundation of self-worth disconnected from work.)
So: I have read in Rational Spaces for almost a decade, and almost never commented. When I did comment, it was in places that I consider Second Foundation. Your effort with Less Wrong is basically the only reason I even tried to comment here, because I had basically accepted that Less Wrong comments are too adversarial for safe and worthwhile discussion.
In my experience—and the Internet provides a lot of places with different discussion norms—collaboration is the main predictor of useful and insightful discussion. I really like those Rational Spaces where there is real collaboration on truth-seeking. I have found a lot of interesting ideas in blogs whose comment sections are adversarial and combative rather than collaborative, and I sometimes found interesting comments there, but I almost never found interesting discussion. I did, however, find a lot of potentially insightful discussions where the absence of good will and trust and collaboration and charity ruined a perfectly good discussion. Sometimes it was people deliberately pretending not to understand what others said, and attacking a strawman instead. Sometimes (especially around politics) people genuinely failed to understand what others said and were unable to hear anything but the strawman version of an argument. A lot of the time, people were too busy trying to win an argument to listen to what the other side was actually trying to convey: trying to find a weak part of the argument to attack instead of trying to understand the vague concept in thingspace that a person was trying to gesture at.
The winning-an-argument mode almost never produces new insights, while sharing experiences and exploring together, without trying to prove anything, is the fertile ground of discussion.
All the rules in this list are rules I agree with. More than half of them will facilitate this type of environment, and other things of yours that I have read make me believe you find this kind of collaborative spirit important. But this is my way of seeing the world, in which this concept of Good Will is really important, and more than half of these rules look like ways to implement that concept in practice. I'm not sure whether this is the way you think about these things, or whether we see the same elements of the territory and map them differently.
If I were writing these rules, I would have started with "don't be irrationally, needlessly adversarial in order to wrongly fulfill your emotional needs; for example: [rules 2, 3, 5, 6, 7, 8, 9, 10]".
But there is enough difference that I suspect there is another concept, near my Good Will concept but different from it, around which those rules cluster, and that I don't entirely grasp.
Can you help me understand whether such a concept exists, and if so, point me to some posts that may help me understand it?
It's very interesting to read that, because I had exactly the opposite reaction:
What if I got irrefutable proof that [my belief X] contradicts the evidence? I would NOT lose all my friends who believe X. What's wrong with them, that their friendships depend on believing X?
My beliefs are idiosyncratic enough that I have never met a person I don't disagree with on something substantial. And yet, I have friends. Maybe it's because I didn't invest a lot of effort in creating groups around beliefs?
Now I wonder how much I typical-mind other people with regard to this question, because I expect that most people would not lose all their friends over that. Especially not "real" friends.
I feel there is some way in which I am still failing the ITT here, but I can't grasp exactly where.
There are posts with good titles, where I understand the concept from the title and expect the post to elaborate. I found this post in the links in Pain is Not The Unit of Effort, and I thought I knew what I would see. I was wrong.
I expected to read a post with examples of the places where pain actually pays off. Pain is orthogonal to success, and that means sometimes you will need pain to succeed, and I expected to see a list of such examples. Only part 2 was an example of that. Parts 1 and 3 were examples of things that are not pain, and part 4 just left me bewildered. Part 5 sounds like a counter-argument to me, and the Antidotes sound like counter-arguments too. They look to me like examples of dysfunction, exactly the attitude the original post came out against.
I will address part 4 specifically, as this is the part I find most strange and confusing.
The Ye Xiu strategy sounds clearly inferior to me, like signaling that you will always cooperate in the Prisoner's Dilemma—you basically incentivize people to defect against you. Why is that a good thing?
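To make the incentive point concrete, here is a minimal sketch using the standard textbook Prisoner's Dilemma payoffs (the numbers are the usual T > R > P > S ordering, my illustration rather than anything from the post):

```python
# Standard one-shot Prisoner's Dilemma payoffs for the row player,
# with the usual ordering T(5) > R(3) > P(1) > S(0).
PAYOFF = {
    ("C", "C"): 3,  # we both cooperate (R, reward)
    ("C", "D"): 0,  # I cooperate, they defect (S, sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (T, temptation)
    ("D", "D"): 1,  # we both defect (P, punishment)
}

def best_response(their_move: str) -> str:
    """My payoff-maximizing move, given that I know theirs in advance."""
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

# Against someone who credibly signals "I will always cooperate,"
# the payoff-maximizing reply is to defect:
print(best_response("C"))  # -> "D"
```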
"That wasn't a real disaster" sounds like a No True Scotsman; moreover, by defining disaster as "you simply die" you make the word useless. Categories exist to point at clusters of things. "Everything" and "the empty set" are both useless categories. Why would you want to take a useful word and render it useless? <very bad things that are worth guarding against> sounds like a good category to me.
If you recover in less than a minute from your startup failing, you don't sound surprised enough, given the disparity between your map and the territory. Emotions serve purposes—like making you try hard to avoid an outcome. And not "hard" as in throwing willpower at it, but "hard" as in dedicating pre-planning and perception and all your ability to think. If you didn't know your startup would fail, you should be surprised; if you did know, you should have done something to prevent it. Also, Chesterton's Fence: human emotions exist for a reason, and I am deeply suspicious of ideologies that glorify emotionlessness. It's like throwing away a really useful tool that was optimized by the blind goddess of evolution. Are you sure you can do better? Really sure?
You say "I'm still in the game," and I think about the time I understood the problem with the social script of "don't give up". Sometimes it's bad to stay. WHY do you think it's good to stay? Why do you judge the one leaving the startup world as a bad thing and your remaining as a good thing? What are the criteria of the judgment?
Part of my problem with the glorify-pain culture is its anti-reflection: it's all "go forward at full force" and not "let's stop and evaluate the options and see which option is best".
You give examples, but not reasons that it was worth it, or that it was even a good thing. And I have the feeling there is something unsaid that this post is trying to reflect, but I can't imagine what person would be persuaded by a post like this, what kind of algorithm could create this post. A total ITT failure on my part.
Maybe I should write the post I expected to find here. The problem? I don't have enough real-world examples for it.
Interesting! Now I'm thinking about what my own version of this post would look like—and what those differences tell me about myself. I think that if different people wrote their own versions (I count Duncan's rules of discussion as his own version, despite the different format), it would give interesting information about how people differ, and about how to pass their ITT. I may try to write my own version of such a post as an exercise in "know thyself".
I didn't intend to comment, but then I read a comment about fighting negativity bias and decided the commenter was right, so I'm doing it too: this new feature is really good. I encountered it in the wild and found it intuitive (except for the sides of the votes, but when I got it wrong the colors clarified it and I fixed it immediately); basically, a very good and useful feature. In my model of the world, 70%+ of users like this feature and don't say so, and the result is the comment section below.
I also find it much better than Duncan's suggestion below, for reasons related to Propagating Beliefs Into Aesthetics: LessWrong aesthetics are very clearly against attention-grabbing things that are Out To Get You, and against signaling undue overconfidence, as Overconfidence Is Deceit; Duncan's suggestion undermines this.
Maybe I should (and probably will not) write my own post about Goodwill. Instead, I will say in a comment what Goodwill is about, by my definition.
Goodwill, the way I see it, is on the emotional level basically respect and cooperation. When someone makes an argument, do you try to see what area in ConceptSpace they are trying to gesture at, and then ask clarifying questions to understand, or do you round their position up to the nearest stupid one, and not even see the actual argument being made? When they say something incoherent, do you even try to parse it, instead of just proving it wrong?
The standard definition of Goodwill does not include the ways in which a failure of Goodwill is a failure of rationality: a failure to see what someone is trying to say, to understand their position and their framing.
Civility is good for its own sake. But almost everyone who decides to be uncivil ends up strawmanning their opponents, and ends up with a more wrong map of the world. What may look like forgiveness from the outside should, for a rationalist, look from the inside like remembering that we expect short inferential distances, that politics wrecks your ability to do math, and that your beliefs filter your perceptions depending on your side in the argument.
I gained my understanding of those phenomena mostly from the Rational Blogosphere, and saw it as part of rationality. There is an important difference between a person executing the algorithm "be civil and forgiving" and a person executing the algorithm "remember biases and inferential distances, and try to overcome them": the latter implemented by understanding the importance of cooperating even after a perceived defection in a noisy environment in the Prisoner's Dilemma, and by assuming that communication is hard and miscommunications are frequent, etc.
So I thought about your comment, and I understand why we think about this in different ways.
In my model of the world, there is an important concept—Goodwill. There are arrows that point toward it, things that create Goodwill—niceness, being on the same side politically, a personal relationship, all sorts of things. There are also things that destroy Goodwill, or even move it into the negative numbers.
There are arrows that come out of this Goodwill node in my causal graph: things like System 1 understanding what was actually said, tending to react nicely, being able to pass the ITT. Some of those things you can get in other ways—people can be polite to people they hate, especially on the internet. But there are things that I have seen only as a result of Goodwill, and correct System 1 interpretation is one of them. Maybe it's possible without Goodwill—but I have never seen it. And the politeness you get without Goodwill is shallow; people's System 1 notices it, in body language and even in writing.
Now, you can dial back on needless insults and condescension; those are adversarial moves that can be consciously chosen or avoided, even if with effort. But from my point of view, when there is so little Goodwill left, the chance for a good discussion is already lost; it can only be bad or very bad. Avoiding very bad is important! But my aim in such situations is to leave the discussion when the Goodwill comes close to zero, and to have a mental alarm screaming at me if I am ever in the negative numbers, or feel like the other person has negative numbers of Goodwill toward me.
So, basically, in my model of the world there is ONE node, Goodwill; there are no separate things. You write: "even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them." But in my model, such a situation is impossible! Yelling at people WILL cause you to strawman them.
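To make that one-node claim concrete, here is a minimal sketch of the causal graph I have in mind (Python just for illustration; the node names and weights are invented, and this is my mental picture, not anything rigorous):

```python
# A toy version of my causal graph. All names and weights are made up.

# Arrows INTO the Goodwill node: things that raise or lower it.
INTO_GOODWILL = {
    "niceness": +1,
    "same side politically": +1,
    "personal relationship": +2,
    "yelling": -3,
    "needless insults": -2,
}

# Arrows OUT of the Goodwill node. In this model these outcomes have
# no other incoming arrows: the only path to them is through Goodwill.
OUT_OF_GOODWILL = [
    "System 1 understands what was actually said",
    "reacting nicely",
    "passing the ITT",
]

def goodwill_level(behaviors: list[str]) -> int:
    """Sum the incoming arrows; the level can go negative."""
    return sum(INTO_GOODWILL.get(b, 0) for b in behaviors)

def possible_outcomes(behaviors: list[str]) -> list[str]:
    """The good outcomes exist only while Goodwill stays above zero."""
    if goodwill_level(behaviors) <= 0:
        return []
    return OUT_OF_GOODWILL

# Yelling drains the one node, so understanding goes with it:
print(possible_outcomes(["personal relationship", "yelling"]))  # []
```

The point of the sketch is that there is no edge from "yelling" straight to "strawmanning" that bypasses the node: drain Goodwill, and all the good outcomes disappear together.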
In my model of the world, this fact is not public knowledge, and my model of it is an important part of what I want to communicate when I'm talking about Goodwill.
Thanks for the conversation! This is the clearest I have ever described my concept of Goodwill, and it was useful for me to formulate it in words.
I was sazened by the word Sazen when I saw Duncan use it on Facebook, and thought I understood it. In my defense, I now believe this word does not carve reality at its joints, and that folk wisdom and what-sazen-should-mean are two different, distinct things.
I want to write a not-short post that explains my own map of the sazen-adjacent part of ConceptSpace, so I am postponing my longer response until I write it. My map of it all, for now, is that you throw a bunch of very different things into this one concept, which I separate into different concepts—ones that should be treated differently. When I unpack folk wisdom, I feel like I now understand it BETTER—but my core understanding remains the same. If someone told me that Duncan is a writer and teacher (and not a Second Foundation Rationalist—which is how I think about you), I would suspect an unfriendly attempt at deception—or, more likely, a stupid joke that plays exactly on the fact that this description is the sort that folk wisdom describes as "half true—whole lie".
Folk wisdom, in my experience, is much more similar to the lossy-compression picture than to the sazen one—when I gained understanding, I felt like the folk-wisdom pointer pointed exactly in the right direction, and what was missing was the emotional understanding. The picture representing it would be a black-and-white version of the same picture. (And I don't call it lossy compression, nor do I find that concept useful.) It differs from the sazen, which is like a picture containing a few distinct features that let you recognize it only if you already know what someone is talking about.
But I don't want to start this discussion now—it's better that I write my own post first.
"And how much is it actually mind-killing in the first place?"
A lot. As in—dumber than a 7-year-old kid.
I remember the time I said to a smart woman with a PhD that good intentions lead to hell, and she then claimed I had said that I was in hell because of her. This was a ridiculous failure of reading comprehension. After that, I started to notice such instances.
My country is in the middle of a major political battle right now, and I wrote to a woman on Facebook whom I had talked to in the past, and she sounded less human than ChatGPT. Someone else, whom I know in real life, behaved very stupidly and uncharitably because I didn't support some argument-soldier, despite the fact that I actually agreed with his position.
The mind-killing effect is STRONG.
“From another perspective, if this were obvious, more people would have discovered it, and if it were easy, more people would do it, and if more people knew and acted in accordance with the below, the world would look very different.”
So: I know another person who did the same, and I tried it for some time, and I think this is an interesting question that I want to try to answer.
So, this other person? Her name is Basmat. And it sorta worked for her. She saw that she was read as a contrarian and received with hostility, and that people attributed to her things she hadn't said. So she decided to write very long posts that explained her worldview and included what she definitely did not mean; she ruled out everything else. And she became a highly respected figure in that virtual community. And… she still had people who misunderstood her. But she had much more legitimacy in shutting them down as illegitimate trolls who need not be respected or addressed.
See, a lot of her opinions were outside the Overton window, and even in an internet community dedicated to such opinions there was a wave of moderation: one that saw people like her as radical and dogmatic and bad and dangerous. And the sheer length… it changed the dynamic. But mostly, it was a costly, and as such trustworthy, signal that she is not dogmatic, that she can be reasoned with. That is one of my explanations for it.
But random people still misunderstood her, in exactly the ways she had ruled out! It was the members of the community, the ones who knew her, who stopped doing that. Random guests—no.
Why? My theory is that there are things that language is designed to make hard to express. The landscape is shaped so that it is easy to misunderstand or misrepresent certain opinions, in Badwill, to make them sound much worse than they are.
And this relates to my experience, which is: most people don't want to communicate in Goodwill. They don't try to understand what I'm trying to point at. They try to round my position to the most strawmannish one they reasonably can, and then attack it.
I can explain at length what I mean, and it will make things worse, as it gives them more words that can be misrepresented.
And what I learned is to be less charitable: to filter those people out ruthlessly, because it is a waste of time to engage. If I make the engagement in little pieces, with opportunities for the other person to give feedback, and ask whether I was understood and whether they disagree—if I make Badwill strategies hard—they will refuse to engage.
And if I clarify and explain and rule out everything else in Goodwill, they just find new and original ways to distort what I just said.
I still haven't read the whole post, but I know my motivation is such that I will write this comment now and not if I postpone it. But I want to say—in my experience, such a strategy works ONLY in a Closed Garden. In an Open Garden, with too many people acting in Badwill, it's a losing strategy.
(I also planned to write about length, and how 80%-90% of people will simply refuse to engage with a long enough text or explanation, but I have exhausted my writing-in-English energy for now. It is a much more important factor than the dynamic I described, but I want to filter such people out, so I mostly ignore it. In the real world, though, You Have Four Words, and most people will simply refuse to listen to or read you, in my experience.)
Edit, after reading the whole post:
So, I was pleasantly surprised by the post. We have very similar models of the dynamics of conversations here. I have little to add besides—I agree!
This is what makes the second part so bewildering—we have totally opposite reactions. But maybe it can be resolved by putting numbers on it?
If I want to communicate an idea that is very close to a politically charged one, 90% of people will be unable to hear it no matter how I say it. 1% will hear it no matter what. And another 9% will listen if it is not in public, if they already know me, and if they are in the right emotional space for it.
Also, 30%-60% of people will pretend they are listening to me in good faith, only to make bad-faith attacks and gotchas.
Which is to say—I did the experiment. And my conclusion was that I need to filter more; that I want to find and avoid the bad-faith actors, the sooner the better; that in almost all cases I will not be able to have a meaningful conversation.
And like, it works, sorta! If I feel extremely Slytherin and Strategic and decide that my goal is to convince people or make them listen to my actual opinion, I sorta can. And they will sorta-listen. And sorta-accept. But with people who can't do the decoupling thing or just trust me, I will not have the sort of discussion I find value in. I will not be able to have a Goodwill discussion. I will have a Badwill discussion where I carefully avoid all the traps and, as a prize, get a you-are-not-like-the-other-X badge. It's a totally unsatisfying, uninteresting experience.
What I learned from my experience is that this work is practically never worth it, and that a lot of the time it is actually counter-productive, as it makes sorting out the Badwill actors harder.
Now I prefer that people who are going to round me to the closest strawman demonstrate it sooner, so that I can avoid them quickly and search for the 1%.
Because those numbers? I pulled them right out of my ass, but they are wildly different in different places, and they depend on the local norms (which is why I hate the way Facebook killed the Hebrew forums—it destroyed the Closed Gardens, and the Open Garden sucks a lot, and very few new Closed Gardens are being created). The numbers can be more like 60%-40% in certain places. Certain places are already filtered for people who think that long posts are good, that nuance is good. And certain places are filtered for lower resolution, where You Have Four Words and every discussion ends with every opinion rounded to one of the three opinions available there, because there is simply no room for better resolution.
It's not worth trying to reason with such people. It's better to find better places.
All this is very good when people try to understand you in Goodwill; it's totally worth it then. But it does not move people from Badwill to Goodwill, from Mindkilled to not. It can make dialogue with mindkilled people sorta not-awful, if you pour in a lot of time and energy. Like, much more than I can in English right now. But it's not worth it.
Do you think it's worth it? Are you thinking of situations, like this one with $ORGANIZATION, where you have to have this dialogue? I feel like we have different kinds of dialogue in mind, and we definitely have very different experiences. I'm not even sure that we are disagreeing on anything, and yet we have very similar descriptions and very different prescriptions...
****
It was very validating to read Varieties Of Argumentative Experience. Because most discussions suck; that's just the way things are.
I can accept that you can accidentally make a discussion suck, but not that you can accidentally move it higher up the discussion pyramid.
****
About this example: I downvoted the first and third, and upvoted the second. My map says that the person who wrote it assigns a high probability to $ORGANIZATION being a bad actor, as part of a complicated worldview about how humans work, and that the comment didn't make him update this probability at all, or made maybe an epsilon update.
He actually has a different model. He actually thinks $ORGANIZATION is a bad actor, and it's good that he can share his model. Do you wish for a Less Wrong where you can't share that model? Do you find this model obviously wrong? I can't believe you want people who think others are bad actors to pretend they don't think so, but that is a failure mode I have seen and highly dislike.
The second comment is highly valuable, and the ability to see and to think "Bullshit" the way its author did is a highly valuable skill that I'm learning now. I hadn't thought about that. I want a constantly running background process that is like that commenter. A Shoulder Advisor, as I believe you would describe it.
There is one main problem with this argument, and it is that people who want to cross the Fence aren't safe in their current position.
For example, high-commitment communities are the "safe" social default, a very old one that survives from before we were human. But, as Ozy wrote, "One of the most depressing facts about high-commitment communities is that they almost all cover up child sexual abuse."
This is the safety of the Fence. This "safety" sucks.
The sister who went no-contact with her rapist father is the black sheep of the family. She is the radical, the revolutionary. All her family thinks she is a bad daughter and that she should not deny her father his granddaughter. Her sister, who sends her little boy unsupervised to his grandfather even after he started wetting himself again—she is the conservative, the one who respects the status quo.
I want to be the black-sheep sister. I can't see the other option as anything but an abomination.
***
A different argument: what is the fence? Because if you ask me, cheating in an unhappy marriage IS the fence, the conservative view. The unconservative view is that you can just divorce. That is very new, and was definitely not the case during most of history, while constant cheating, sometimes with "a self-respecting woman has a husband and a lover" as a folk-wisdom idiom, was the norm in some times and places.
So how can you be respectful of the fence when you don't know which side is the conservative one?
(It's like what Duncan said, but from a different angle.)
What is cool about that post is its self-demonstrating nature. Like the maze, it gives an explanation that yields a less precise map of the world, with less predictive power than the more standard model, and it gives a more pessimistic and cynical explanation. You trade away your precision and predictive power in order to be cynical and pessimistic!
And now I can formalize what I didn't like about this branch of rationality: it looks like cynicism is their bottom line. They are already convinced, in the depths of their hearts, that the most pessimistic theory is true, and now they are ready to bite the bitter bullet.
But from the outside, I see no supporting evidence. This is not how people behave; the predictions generated by such theories are wrong. It's such a strange thing to write on the bottom line! But being unpleasant does not make something true, any more than being pleasant makes it true.
And as the Wizard's First Rule says, people believe things they want to believe, or things they are afraid to believe...