dxu:

my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist,
Jennifer:
I think that it is likely that neither Duncan nor I consider ourselves in the first category.
Duncan, in the OP, which Jennifer I guess skimmed:
What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.
I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said.
I apologize for over-simplifying, maybe I should have added “primarily” and/or “currently” to make it more literally true.
In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood… maybe looking to score points for unfairness?
What I’m getting out of LessWrong these days is readership. It’s a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn’t have ever come to on my own.
My model here is that this is your self-identified “revealed preference” for actually being here right now.
Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.
This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see myself as more of a teacher than a student round these parts. I’m not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)
It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?
And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?
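(For concreteness, here is a minimal sketch of the textbook two-player stag hunt I have in mind when I say “rabbit equilibrium” and “stag equilibrium”. The payoff numbers are illustrative only, not anything taken from the OP.)

```python
# Textbook two-player stag hunt (illustrative payoffs only, not from the OP).
# Hunting stag pays off big, but only if BOTH players commit to it.
PAYOFF = {
    ("stag", "stag"): (4, 4),      # coordinated stag hunt: big shared win
    ("stag", "rabbit"): (0, 3),    # lone stag hunter gets nothing
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),  # everyone hunts rabbit: safe but small
}

def is_nash(profile):
    """True if neither player can do better by unilaterally switching."""
    for i in (0, 1):
        for deviation in ("stag", "rabbit"):
            alt = list(profile)
            alt[i] = deviation
            if PAYOFF[tuple(alt)][i] > PAYOFF[profile][i]:
                return False
    return True

print([p for p in PAYOFF if is_nash(p)])
# [('stag', 'stag'), ('rabbit', 'rabbit')] -- both are stable, which is why
# moving from the rabbit equilibrium to the stag equilibrium requires everyone
# to switch more or less at once, not one person at a time.
```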
...
There’s an interesting possible equivocation here.
(1) “Duncan growing as a rationalist as much and as fast as he (can/should/does?) (really?) wants does in fact require a rabbit-to-stag Nash equilibrium shift among all of lesswrong”.
(2) “Duncan growing as a rationalist as much and as fast as he wants seems to him to require a rabbit-to-stag Nash equilibrium shift among all of lesswrong… which might then logically universally require removing literally every rabbit player from the game, either by conversion to playing stag or banning”.
These are very similar. I like having them separate so that I can agree and disagree with you <3
Also, consider then a third idea:
(3) A rabbit-to-stag Nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.
I think that you probably think 1 and 2 are true and 3 is false.
I think that 2 is true, and 3 is true.
Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits).
Because I think 2 is true, I think you’re motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.
In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curriculum with post-tests that truly measure mastery, and so on.
If you need growth as a rationalist to be happy, AND this website’s current shape (vis-a-vis stag hunts etc) means it can’t meet that need, THEN (maybe?) you need to get those needs met somewhere else.
For what it’s worth, I think that 1 is false for many many people, and probably it is also false for you.
I don’t think you should leave, I just think you should be less interested in a “pro-stag-hunting jihad” and then I think you should get the need (that was prompting your stag hunting call) met in some new way.
I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, coming to truly make some new and true and very useful ideas a part of themselves), and then give rabbit hunting reports, and share rabbit hunting techniques, and so on. There’s a virtuous cycle here potentially!
In my opinion, such a “skill building in rabbit hunting techniques” sort of rationality… is all that can be done in an environment like this.
Also I think this kind of teaching environment is less available in many places, and so it isn’t that this place is bad for not offering more, it is more that it is only “better by comparison to many alternatives” while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)
So in my model, where 2 is true, 1 is false for many (and maybe even for you), and 3 is true… your whole stag hunt concept, applied here, suggests to me that you’re “low key seeking to gain social permission” from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.
I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) “place on the internet” full of people semi-mindlessly shrieking at each other by default.
If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.
These subcultures often enable larger quantities of shared normative material to circulate at much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.
In my mind, Lesswrong itself has a potential function here as a place to learn that the other subcultures exist, and/or to audition for entry or invitation, and so on. This auditioning/discovery role seems highly compatible to me with the “rabbit hunting rationality improvement” function.
In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to tolerantly teach, without demanding a “level” of play that is only required to meet your particular educational needs.
To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong’s canon for granted while also tolerating and even encouraging variations… because it certainly isn’t the case that lesswrong is perfect.
(There’s a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure “smart person mental elbow grease” and also in memetic diversity) stays, over longer periods of time, on a trajectory of “getting less wrong over time”… though I don’t know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)
...
So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth “out there” where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.
Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn’t complain about it until or unless someone hired that firm to “impose costs” on me… then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad… it is just often the “last refuge of the incompetent”.
In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop in groups), then that would be fine with me...
...so long as they don’t damage the “good hubness” of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong’s explicitly epistemic norms, because having well ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok… indeed it might be a critical requirement for positive-sum, Pareto-improving cooperation in a world full of conservation laws).
… a staging area and audition space for more specific and more demanding subcultures …
Here is a thing I wrote some years ago (this is a slightly cleaned up chat log, apologies for the roughness of exposition):
There was an analogue to this in WoW as well, where, as I think I’ve mentioned, there often was such a thing as “within this raid guild, there are multiple raid groups, including some that are more ‘elite’/exclusive than the main one”; such groups usually did not use the EPGP or other allocation system of the main group, but had their own thing.
(I should note that such smaller, more elite/exclusive groups, typically skewed closer to “managed communism” than to “regulated capitalism” on the spectrum of loot systems, which I do not think is a coincidence.)
“Higher internal trust” is true, but not where I’d locate the cause. I’d say “higher degree of sublimation of personal interest to group interest”.
[name_redacted]: Ah. … More dedicated?
Yes, and more willing to sacrifice for the good of the raid. Like, if you’re trying to maintain a raiding guild of 100 people, keep it functioning and healthy over the course of months or years, new content, people joining and leaving, schedules and life circumstances changing, different personalities and background, etc., then it’s important to maintain member satisfaction; it’s important to ensure that people feel in control and rewarded and appreciated; that they don’t burn out or develop resentments; that no one feels slighted, and no one feels that anyone is favored; you have to recruit, also...
All of these things are more important than being maximally effective at downing this boss right now and then the next five bosses this week.
If you focus on the latter and ignore the former, your guild will break and explode, and people on WoW-related news websites will place stories about your public meltdowns in the Drama section, and laugh at you.
On the other hand… if you get 10 guys together and you go “ok dudes, we, these particular 10 people, are going to show up every single Sunday for several months, play for 6 hours straight each time, and we will push through absolutely the most challenging content in the game, which only a small handful [or sometimes: none at all] of people in the world have done”… that is a different scenario. There’s no room for “I’m not the tank but I want that piece of tank gear”, because if you do that you will fail.
What a group like that promises (which a larger, more skill-diverse, less elite/exclusive, group cannot promise) is the incredible rush of pushing yourself—your concentration, your skill, your endurance, your coordination, your ingenuity—to the maximum, and succeeding at something really really hard as a result.
That is the intrinsic motivation which takes the place of the extrinsic motivation of getting loot. As a result, the extrinsic motivation is no longer a resource which it is vitally important to allocate.
In that scenario, your needs are the group’s needs; the group’s successes are your successes; there is no separation between you and the group, and consequently the need for equity in loot allocation falls away, and everything is allocated strictly by group-level optimization.
Of course, that sort of thing doesn’t scale, and neither can it last, just as you cannot build a whole country like a kibbutz. But it may be entirely possible, and perfectly healthy, to occasionally cleave off subgroups who follow that model, then to meld back into the overgroup at the completion of a project (and never having really separated from it, their members continuing to participate in the overgroup even as they throw themselves into the subproject).

Yeah! This is great. This is the kind of detailed grounded cooperative reality that really happens sometimes :-)
I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood
If a person writes “I currently get A but what I really want is B”
...and then you selectively quote “I currently get A” as justification for summarizing them as being unlikely to want B...
...right after they’ve objected to you strawmanning and misrepresenting them left and right, and made it very clear to you that you are nowhere near passing their ITT...
...this is not “simplification.”
Apologizing for “over-simplifying,” under these circumstances, is a cop-out. The thing you are doing is not over-simplification. You are [not talking about simpler versions of me and my claim that abstract away some of the detail]. You are outright misrepresenting me, and in a way that’s reeeaaalll hard to believe is not adversarial, at this point.
It is at best falling so far short of cooperative discourse as to not even qualify as a member of the set, and at worst deliberate disingenuousness.
If a person wholly misses you once, that’s run-of-the-mill miscommunication.
If, after you point out all the ways they missed you, at length, they brush that off and continue confidently arguing with their cardboard cutout of you, that’s a bad sign.
If, after you again note that they’ve misrepresented you in a crucial fashion, they apologize for “over-simplifying,” they’ve demonstrated that there’s no point in trying to engage with them.
I explicitly acknowledge that I see myself as more of a teacher than a student round these parts.

I find this unpromising, in light of the above.
I’m torn about getting into this one, since on one hand it doesn’t seem like you’re really enjoying this conversation or would be excited to continue it, and I don’t like the idea of starting conversations that feel like a drain before they even get started. In addition, other than liking my other comment on this post, you don’t really know me and therefore I don’t really have the respect/trust resources I’d normally lean on for difficult conversations like this (both in the “likely emotionally significant” and also “just large inferential distances with few words” senses).
On the other hand I think there’s something very important here, both on the object level and on a meta level about how this conversation is going so far. And if it does turn out to be a conversation you’re interested in having (either now, or in a month, or whenever), I do expect it to be actually quite productive.
If you’re interested, here’s where I’m starting:
Jennifer has explicitly stated that at this point her goal is to help you. This doesn’t seem to have happened. While it’s important to track possibilities like “Actually, it’s been more helpful than it looks”, it looks more like her attempt(s) so far have failed, and this implies that she’s missing something.
Do you have a model that gives any specific predictions about what it might be? Regardless of whether it’s worth the effort or whether doing so would lead to bad consequences in other ways, do you have a model that gives specific predictions of what it would take to convey to her the thing(s) she’s missing such that the conversation with her would go much more like you think it should, should you decide it to be worthwhile?
Would you be interested in hearing the predictions my models give?
I don’t have a gearsy model, no. All I’ve got is the observations that:
Duncan’s post objects to a cluster of things X, Y, and Z
Jennifer’s response seems to me to state that X, Y, and Z are either not worth objecting to or possibly are actually good
Jennifer’s response exhibits X, Y, and Z in substantial quantity (which, to be fair, is consistent with principled disagreement, i.e. is not a sign of hypocrisy or lack-of-skill or whatever)
Duncan’s objections to X, Y, and Z within Jennifer’s pushback are basically falling on deaf ears, resulting in Jennifer adding more X, Y, and Z in subsequent responses
As is to be expected, given that the whole motivation for the OP was “LessWrong keeps indulging in and upvoting X, Y, and Z,” Jennifer’s being upvoted.
I’m interested in hearing both your model and your predictions. Perhaps a timescale of days-weeks is better than a timescale of hours-days.
There’s a lot here, and I’ve put in a lot of work writing and rewriting. After failing for long enough to put things in a way that is both succinct and clear, I’m going to abandon hopes of the latter and go all in on the former. I’m going to use the minimal handles for the concepts I refer to, in a way similar to using LW jargon like “steelman” without the accompanying essays, in hopes that the terms are descriptive enough on their own. If this ends up being too opaque, I can explicate as needed later.
Here’s an oversimplified model to play with:
Changing minds requires attention, and bigger changes require more attentions.
Bidding for bigger attention requires bigger respect, or else no reason to follow.
Bidding for bigger respect requires bigger security, or else not safe enough to risk following.
Bidding for that sense of security requires proof of actual security, or else people react defensively, cooperation isn’t attended to, and good things don’t happen
GWS took an approach of offering proof of security and making fairly modest bids for both security and respect. As a result, the message was accepted, but it was fairly restrained in what it attempted to communicate. For example, GWS explicitly says “I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question the validity of your thesis variety).”
Jennifer, on the other hand, went full bore, commanding attention to places which demand lots of respect if they are to be followed, while offering little in return*. As a result, accepting this bid also requires a large degree of security, and she offered no proof that her attacks on Duncan’s ideas (it feels weird addressing you in the third person given that I am addressing this primarily to you, but it seems like it’s better looked at from an outside perspective?) would be limited to that which wouldn’t harm Duncan’s social standing here. This makes the whole bid very hard to accept, and so it was not accepted, and Duncan gave high heat responses instead.
Bolder bids like that make for much quicker work when accepted, so there is good reason to be as bold as your credit allows. One complicating factor here is that the audience is mixed, and overbidding for Duncan himself doesn’t necessarily mean the message doesn’t get through to others, so there is a trade off here between “Stay sufficiently non-threatening to maintain an open channel of cooperation with Duncan” and “Credibly convey the serious problems with Duncan’s thesis, as I see them, to all those willing to follow”.
Later, she talks about wanting to help Duncan specifically, and doesn’t seem to have done so. There are a few possible explanations for this.
1) When she said it, there might have been an implied “[I’m only going to put in a certain level of work to make things easy to hear, and beyond that I’m willing to fail]”. In this branch, the conversation between Duncan and Jennifer is going nowhere unless Duncan decides to accept at least the first bid of security. If Duncan responds without heat (and feeling heated but attempting to screen it off doesn’t count), the negotiation can pick up on the topic of whether Jennifer is worthy of that level of respect, or further up if that is granted too.
2) It’s possible that she lacks a good and salient picture of what it looks like to recover from over-bidding, and just doesn’t have a map to follow. In this branch, demonstrating what that might look like would likely result in her doing it and recovering things. In particular, this means pacing Duncan’s objections without (necessarily) agreeing with them until Duncan feels that she has passed his ITT and trusts her intent to cooperate and collaborate rather than to tear him down.
3) It could also be that she’s got her own little hang up on the issue of “respect”, which caused a blind spot here. I put an asterisk there earlier, because she was only showing “little respect” in one sense, while showing a lot in another. If you say to someone “Lol, your ideas are dumb”, it’s not showing a lot of respect for those ideas of theirs. To the extent that they afford those same ideas a lot of respect, it sounds a lot like not respecting them, since you’re also shitting on their idea of how valuable those ideas are and therefore their judgement itself. However, if you say to someone “Lol, your ideas are dumb” because you expect them to be able to handle such overt criticism and either agree or prove you wrong, then it is only tentatively disrespectful of those ideas and exceptionally and unusually respectful of the person themselves.
She explicitly points at this when she says “Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal”, and then hints at a blind spot when she says (emphasis her own) “who should be able to manage himself and his own emotions”—translating to my model, “manage himself and his emotions” means finding security and engaging with the rest of the bids on their own merits unobstructed by defensive heat. “Should” often points at a willful refusal to update one’s map to what “is”, and instead responding to it by flinching at what isn’t as it “should” be. This isn’t necessarily a mistake (in the same way that flinching away from a hot stove isn’t a mistake), and while she does make other related comments elsewhere in the thread, there’s no clear indication of whether this is a mistake or a deliberate decision to limit her level of effort there. If it is a mistake, then it’s likely “I don’t like having to admit that people don’t demonstrate as much security as I think they should, and I don’t wanna admit that it’s a thing that is going to stay real and problematic even when I flinch at it”. Another prediction is that to the extent that it is this, and she reads this comment, this error will go away.
I don’t want to confuse my personal impression with the conditional predictions of the model itself, but I do think it’s worth noting that I personally would grant the bid for respect. Last time I laughed off something that she didn’t agree should be laughed off, it took me about five years to realize that I was wrong. Oops.

Just checking, what are X, Y and Z?
(I’m interested in a concrete answer but would be happy with a brief vague answer too!)
(Added: Please don’t feel obliged to write a long explanation here just because I asked, I really just wanted to ask a small question.)
The same stuff that’s outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say “just the basics, consistently done.”
Regenerating the list again:
Engaging in, and tolerating/applauding those who engage in:
Strawmanning (misrepresenting others’ points as weaker or more extreme than they are)
Projection (speaking as if you know what’s going on inside other people’s heads)
Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you’ve observed
Overstating the strength of your claims
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Weaponizing equivocation/doing motte-and-bailey
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth

This is not an exhaustive list.
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as playing stag” would mean that all of lesswrong fails at the stag hunt, right?
And it might be the case that a single person playing stag could be made up of them failing at even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)
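(To make the reading I am asking about concrete, here is a toy version of it. This is my construction, not something stated in the OP: it just encodes “the group only catches the stag if literally everyone plays stag”.)

```python
# Toy version of the reading being asked about (my construction, not the OP's):
# the group payoff only materializes if literally every participant plays stag.
def group_outcome(choices):
    """choices: one 'stag' or 'rabbit' entry per participant on the site."""
    if all(choice == "stag" for choice in choices):
        return "stag caught"       # the big coordinated win
    return "stag hunt fails"       # a single rabbit player spoils it

print(group_outcome(["stag"] * 1000))               # stag caught
print(group_outcome(["stag"] * 999 + ["rabbit"]))   # stag hunt fails
```

Under a model like that, the pessimistic conclusions below follow almost by definition, which is part of why I am asking whether it is really the intended mechanism.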
Also, what you’re calling “projection” there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can’t choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(

(For myself, I try not to assume I even know what’s happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)
The practical upshot here, to me, is that if the models you’re advocating here are true, then it seems to me like lesswrong will inevitably fail at “hunting stags”.
...
And yet it also seems like you’re exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then… maybe we will eventually all play stag and thus eventually, as a group, catch a stag?
So under the models that you seem to me to have offered, the (numerous individual) costs won’t buy any (group) benefits? I think?
There will always inevitably be a fly in the ointment… a grain of sand in the chip fab… a student among the masters… and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?
And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!
And that’s (in my book) quite good… even if it means we will always fail at hunting stags.
...
The thing I think that’s good about lesswrong has almost nothing to do with bringing down a stag on this actual website.
Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can “do more good thinking” in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.
I’m (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time… Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).
You’re against “engaging in, and tolerating/applauding” lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.

Am I missing something? What?
I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (on the order of E-18 [1]) in support of this.
The only model I have for your behavior that explains why you would do this, instead of assuming something like Duncan believing something like “The value of C cooperators and D defectors is min(0, C − D²)”, is that you are trying to make the argument look weak. If there is another reason to do this, I’d appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.
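(To make the contrast concrete, here is a minimal sketch of the two readings. The probability, the participant count, and the reading of the formula as C − D² are all assumptions of mine, purely for illustration.)

```python
# Two toy readings of "how much do imperfect participants cost the site?".
# All specific numbers here are assumed for illustration, not taken from the thread.

# Reading 1 (all-or-nothing): value exists only if nobody ever slips.
p = 0.99       # assumed chance that any one participant avoids every listed sin
N = 4000       # assumed number of active participants
print(p ** N)  # ~3e-18 -- the flavor of an "order of E-18" figure

# Reading 2 (the threshold formula quoted above, reading "D2" as D squared):
# a handful of defectors costs nothing; damage starts only once D**2 exceeds C.
def value_impact(C, D):
    return min(0, C - D ** 2)

print(value_impact(C=1000, D=5))    # 0    -> no loss from a few defectors
print(value_impact(C=1000, D=40))   # -600 -> loss once defection is widespread
```

Under the second reading, a single imperfect participant does not zero out the value, which is the point of the contrast.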
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as playing stag” would mean that all of lesswrong fails at the stag hunt, right?
No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.
Epistemic status: socially brusque wild speculation. If they’re in the area and it wouldn’t be high effort, I’d like JenniferRM’s feedback on how close I am.
My model of JenniferRM isn’t of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite’s comment below, they say:
It was a purposefully pointed and slightly unfair question. I didn’t predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).
If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.
My model of the model which outputs words like these is that they’re very confident in their own understanding—viewing themself as a “teacher” rather than a student—and are trying to lead someone who they think doesn’t understand by the nose through a conversation which has been plotted out in advance.

Plausible to me. (Thanks.)