Executive Summary: LessWrong 2.0 as it actually exists runs at Bus Factor Habryka, and this is probably fine.
(epistemic status: I notice my thesis is confused, but want comments on it anyway. Writing a long comment since I don’t have time to write a shorter one.)
If we compare this post directly to LessWrong, things become less clear to me, because I’m not certain which elements of LessWrong are designed to persist.
When we look at LessWrong 1.0 (before my time), as described by “what Alex of LessWrong 2.0 believes about history”, it consists of (i) The Sequences, which are widely read (tangent: probably not required reading, in that many modern community members pick up the norms without having read the original source material), and (ii) comments on blog posts, which are archived but ~nobody reads. (tangent: I believe archiving the comments is strongly good, because it makes the information environment better for future historians, and regardless of whether we expect those future historians to actually exist, it’s good for humans to act as if there will be a Future and be pro-social in relation to it.)
LessWrong 2.0 is the site that Oliver is the chief moderator of. This is your walled garden. It’s a good garden. I spend a lot of time here. It is inextricably linked with Lighthaven (technically separate), Lightcone (the umbrella org), and the surrounding community. However, unlike Washington and the US Government, or the French and their revolutions, where the institution is the machinery of state, designed as a legible and predictable monopoly on violence that persists for generations, I don’t see why LessWrong has to do the same.
You have short timelines (meaning <20yrs with high probability, <40yrs with very high probability) (epistemic status: if this is wrong I have to throw out a lot of my models of the world).
My Oliver-model believes the current version of LessWrong 2.0 (with “bus factor 1=Ollie”) can persist for as long as is relevant pre-AGI/ASI, unless the world drastically changes in ways he doesn’t expect.
Therefore I’m not sure you need a succession plan? You don’t need to defend the information ecosystem of LessWrong forever. This institution doesn’t need to last forever. I certainly hope “we win AI”, and we can continue with a “LessWrong 3.0” that is similar and yet also different, but LessWrong 2.0 doesn’t need to be that.
probably not-required-reading, in that many modern community members pick up the norms without having read the original source material
If you think the Sequences were about inculcating “norms”, then you definitely need to read the original source material, which explicitly denies this! (Norms are made and unmade by other people, but the question of which computations result in accurate beliefs and effective plans is determined by the structure of reality; it’s “law” as in “laws of physics”, not “common law”.)
I disagree. Of course the Sequences are about inculcating social norms, as is almost all large-scale human communication. Norms are how humans actually communicate ideas.
An explanation: when humans think in groups, their thinking is shaped by the norms of the group. e.g: the way one would post to obtain social status and respect on 4chan is different from LessWrong 2.0.
LessWrong’s norms (e.g. explain, don’t persuade; avoid insulting people about their knowledge of source material; get curious about other people’s models) have been built (both deliberately by Habryka/Lightcone, and in more nebulous social ways) to make it easier for people to think well when they communicate using them.
We can also reverse this. Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc norms will find most of the content of the Sequences obvious. E.g. the norm “offer concrete predictions” is “make beliefs pay rent”; the norm “offer concrete models” enables crux-finding. And so on.
Yes, the Sequences say they are about giving you the tools to know which computations result in accurate beliefs. However, the average person who benefits from a specific Yudkowsky essay probably has not read that essay (claim not fully justified in this comment, it’s long enough already). To be more specific, the average person who benefits from the idea “the map is not the territory” has not read the original (Korzybski, 1931). Instead, they read/listened to someone who read/listened to… (some intermediate layers) until we get to an original source.
We call the ideas that propagate throughout a community until they become obvious to most members ‘norms’.
(epistemic status: feeling incredibly insulted. I’ve read the Sequences)
Surely not the only way. A lot of ideas can’t be communicated via norms, because norms don’t have the bandwidth. For an arbitrary example, take the relative state interpretation of quantum mechanics. You definitely need reading and math for that, not just norms.
Someone who thoroughly understands LessWrong/rationalist/Yudkowskian/etc norms will find most of the content of the sequences obvious.

I don’t think this is true, because of the bandwidth issue. The group norms don’t get you to “A Technical Explanation of Technical Explanation”.
avoid insulting people about their knowledge of source material
That’s a catastrophically bad norm because it degrades propagation of the source material. If someone says something misleading or incorrect about the source material, the way to promote knowledge is to correct them, but that risks insulting them. People who care about the integrity of the source material should want to be corrected (and graciously tolerate some rate of false attempted “corrections” as the price of receiving true corrections).
get curious about other people’s models
This seems like a suboptimal norm because it promotes inefficient allocation of attention. If you don’t have the capacity to be curious about everything, you have to prioritize, and if you have to prioritize, that implies being less curious about some people’s models if what you’ve heard from them so far doesn’t seem promising.

Opting out of this conversation.
I feel like I have a thesis here: generating cultural information in the modern world involves writing essays where, if you do well, you impact more people who didn’t read the source material than people who did (e.g. my Korzybski point). You are ignoring that central point.
Instead, you are aiming to persuade me of your point by using weaknesses in my analogies.
LessWrong is for learning about each other’s models, not for having an argument. We’re having an argument. I’m deliberately not engaging with your most recent points because I don’t want arguments like this on my favourite website.
Wait, now I am curious! Please tell me more about your model that LessWrong is for learning about each other’s models, not for having an argument. That was actually not my understanding! Where did you learn that? Can you say more about why you think arguments are bad and model-sharing is good? (Maybe focus on the former if you think the latter is too obvious to need elaboration.)
This is sociologically fascinating. I strong-upvoted your comment and will strong-upvote any more explanation you can give me.
According to me (and potentially nobody else), LessWrong is a place where we do argument in the truth-building/philosophical/debate sense, not in the shouting-match sense. I think there is a way of doing argument that works, but the above was not working for me.
[I felt like the above was getting into “shouting match” territory more than “squishing our different models of the situation together and attempting to do our best to get to Aumann’s Agreement Theorem in real life”. (Note this is mostly because I noticed myself getting defensive in my own head; your comments may have worked perfectly well on the same words posted by someone else.)]
According to me, this is good. The reason that comes to mind: we want LessWrong to be a place of repeated idea exchange, so people getting alienated is bad because they might stop posting, and model-sharing leads to much less alienation than bad-tempered argument. Though this may not be cruxy.
(epistemic status—typed quickly. Am interested if you disagree with my central point. My examples almost certainly have non-cruxy holes in them)