probably not-required-reading, in that many modern community members pick up the norms without having read the original source material
If you think the Sequences were about inculcating “norms”, then you definitely need to read the original source material, which explicitly denies this! (Norms are made and unmade by other people, but the question of which computations result in accurate beliefs and effective plans is determined by the structure of reality; it’s “law” as in “laws of physics”, not “common law”.)
I disagree. Of course the Sequences are about inculcating social norms, as is almost all large-scale human communication. Norms are how humans actually communicate ideas.
An explanation: when humans think in groups, their thinking is shaped by the norms of the group. E.g. the way one would post to obtain social status and respect on 4chan is different from the way one would on LessWrong 2.0.
LessWrong’s norms (e.g. explain, don’t persuade; avoid insulting people about their knowledge of source material; get curious about other people’s models; etc.) have been built (both deliberately by Habryka/Lightcone, and in more nebulous social ways) to make it easier for people to think better when they communicate using them.
We can also reverse this. Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc norms will find most of the content of the sequences obvious. E.g. the norm “offer concrete predictions” is “making beliefs pay rent”; the norm “offer concrete models” enables crux-finding. And so on.
Yes, the Sequences say they are about giving you the tools to know which computations result in accurate beliefs. However, the average person who benefits from a specific Yudkowsky essay probably has not read that essay (claim not fully justified in this comment, it’s long enough already). To be more specific, the average person who benefits from the idea “the map is not the territory” has not read the original (Korzybski, 1931). Instead, they read/listened to someone who read/listened to… (some intermediate layers) until we get to an original source.
We call the ideas that propagate throughout a community until they become obvious to most members ‘norms’.
(epistemic status: feeling incredibly insulted. I’ve read the Sequences)
Surely not the only way. A lot of ideas can’t be communicated via norms, because norms don’t have the bandwidth. For an arbitrary example, take the relative state interpretation of quantum mechanics. You definitely need reading and math for that, not just norms.
Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc norms will find most of the content of the sequences obvious.
I don’t think this is true, because of the bandwidth issue. The group norms don’t get you to “A Technical Explanation of Technical Explanation”.
avoid insulting people about their knowledge of source material
That’s a catastrophically bad norm because it degrades propagation of the source material. If someone says something misleading or incorrect about the source material, the way to promote knowledge is to correct them, but that risks insulting them. People who care about the integrity of the source material should want to be corrected (and graciously tolerate some rate of false attempted “corrections” as the price of receiving true corrections).
get curious about other people’s models
This seems like a suboptimal norm because it promotes inefficient allocation of attention. If you don’t have the capacity to be curious about everything, you have to prioritize, and if you have to prioritize, that implies being less curious about some people’s models if what you’ve heard from them so far doesn’t seem promising.
opting out of this conversation.
I feel like I have a thesis here: generating cultural information in the modern world involves writing essays that, when they do well, impact more people who didn’t read the source material than people who did (e.g. my Korzybski point). You are ignoring that central point.
Instead, you are aiming to persuade me of your position by exploiting weaknesses in my analogies.
LessWrong is for learning about each other’s models, not for having an argument. We’re having an argument. I’m deliberately not engaging with your most recent points because I don’t want arguments like this on my favourite website.
Wait, now I am curious! Please tell me more about your model that LessWrong is for learning about each other’s models, not for having an argument. That was actually not my understanding! Where did you learn that? Can you say more about why you think arguments are bad and model-sharing is good? (Maybe focus on the former if you think the latter is too obvious to need elaboration.)
This is sociologically fascinating. I strong-upvoted your comment and will strong-upvote any more explanation you can give me.
According to me (and potentially nobody else), LessWrong is a place where we do argument in the truth-building/philosophical/debate sense, not in the shouting-match sense. I think there is a way of doing argument that works, but the above was not working for me.
[I felt like the above was getting into “shouting match” territory rather than “squishing our different models of the situation together and doing our best to get to Aumann’s Agreement Theorem in real life”. (Note this is mostly because I noticed myself getting defensive in my own head—your comments may have worked perfectly well on the same words posted by someone else.)]
According to me, this is good. The reason that comes to mind is “we want LessWrong to be a place of repeated idea exchange, and therefore people getting alienated is bad because then they might stop posting—model-sharing leads to much less alienation than bad-tempered argument”, although this may not be cruxy.
(epistemic status—typed quickly. Am interested if you disagree with my central point. My examples almost certainly have non-cruxy holes in them)