Associate yourself with people with whom you can confidently and cheerfully outperform the Nash Equilibrium.
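To unpack the game-theory reference: in a one-shot Prisoner's Dilemma, mutual defection is the unique Nash equilibrium, yet mutual cooperation leaves both players strictly better off. A minimal sketch using the conventional payoff numbers (the specific payoffs are the textbook defaults, not anything from this thread):

```python
# Standard Prisoner's Dilemma payoff table: (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # the Nash equilibrium: mutual defection
}

def is_nash(row, col):
    """True if neither player can improve by deviating unilaterally."""
    other_row = "C" if row == "D" else "D"
    other_col = "C" if col == "D" else "D"
    return (payoffs[(other_row, col)][0] <= payoffs[(row, col)][0]
            and payoffs[(row, other_col)][1] <= payoffs[(row, col)][1])

print(is_nash("D", "D"))  # mutual defection: no unilateral improvement exists
print(is_nash("C", "C"))  # mutual cooperation: each player is tempted to defect
print(payoffs[("C", "C")], payoffs[("D", "D")])  # yet cooperation pays both more
```

Cooperating with people you trust is precisely what lets a group sit at the (3, 3) cell instead of the equilibrium (1, 1).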
I’ll review and think more carefully later — out at dinner with a friend now — but my quick thought is that the proper venue, time, and place for expressing discontent with a cooperative community project is probably afterwards, possibly beforehand, and certainly not during… I don’t believe in immunity from criticism, obviously, but I am against defection when one doesn’t agree with a choice of norms.
That’s the quick take, will review more closely later.
Hey—to preface—obviously I’m a great admirer of yours Kaj and I’ve been grateful to learn a lot from you, particularly in some of the exceptional research papers you’ve shared with me.
With that said, of course your emotions are your own but in terms of group ethics and standards, I’m very much in disagreement.
The upset feels similar to what I’ve previously experienced when something that’s obviously a purely symbolic gesture is treated as a Big Important Thing That’s Actually Making A Difference.
On the one hand, you’re totally right. On the other hand, basically the entire world is made up of abstractions along these lines. What can the Supreme Court opinion in Marbury v. Madison be recognized as, other than a purely symbolic gesture? Madison wasn’t going to deliver the commissions, Justice Marshall (no relation) knew that for sure, and he navigated the thing with a largely symbolic gesture. It had no practical importance for a long time, but it now forms one of the foundations of American jurisprudence, indirectly affecting billions of lives. And yet, if you dig into the history, it really was largely symbolic at the time.
The world is built out of all sorts of abstract symbolism and intersubjective convention.
That by itself wouldn’t trigger the reaction; the world is full of purely symbolic gestures that claim to make a difference, and they mostly haven’t upset me in a long time. Some of the communication around Petrov Day has. I think it’s because of a sense that this idea is being pushed on people-that-I-care-about as something important despite not actually being in accordance with their values, and that there’s social pressure for people to be quiet about it and give in, at a cost to their epistemics.
Canonical reply is this one:
(“Canonical” was intentionally chosen, incidentally.)
I feel like Oliver’s comment is basically saying “people should have taken this seriously and people who treat this light-heartedly are in the wrong”. It’s spoken from a position of authority, and feels like it’s shaming people whose main sin is that they aren’t particularly persuaded by this ritual actually being significant, as no compelling reason for this ritual actually being significant has ever been presented.
From Well-Kept Gardens:
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. [...] I have seen rationalist communities die because they trusted their moderators too little.
Honestly, for anything that wasn’t clearly egregiously wrong, I’d support the leadership team on here even if my feelings ran in a different direction. Like, leadership is hard. Really really really hard. If there was something I didn’t believe in, I’d just quietly opt out.
Now, I fully understand I’m in the minority on this position — but I’m against ‘every interpretation is valid’ type thinking (why would every interpretation be valid as it relates to a group activity where your behavior affects the whole group?).
Likewise, pushing back against “shaming people whose main sin is that they aren’t particularly persuaded by this ritual actually being significant” — isn’t that actually both good and necessary if we want to be able to coordinate and actually solve problems?
There’s a dozen or so Yudkowsky citations about this. Here’s another:
Let’s say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.
Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?
In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.
Now it may be the case—a more agreeable part of me wants to interject—that this ritual actually is important, and that it should be treated as more than just a game.

But.

If so, I have never seen a particularly strong case being made for it.
I made that case last year extensively:
I even did, like, math and stuff. The “shut up and multiply” thing.
Long story short — I think shared trust and demonstrated cooperation are super valuable, good leadership is incredibly underappreciated, and whimsical defection is really bad.
Again though — all written respectfully, etc etc, and I know I’m in the minority position here in terms of many subjective personal values, especially harm/care and seriousness/fun.
Finally, my estimate of the potential utility of building out a base of successfully navigated low-stakes cooperative endeavors is undoubtedly multiple orders of magnitude higher than most others’. I put the dollar value of that as, actually, pretty high. Reasonable minds can differ on many of these points, but that’s my logic.
Ah, I see, I read the original version partially wrong, my mistake. We’re in agreement. Regards.
Hmm. Appreciate your reply. I think there’s a subtle difference here, let me think about it some.
Thrashing it out a bit more, I do think a lot of semi-artificial situations are predictive of future behavior.
Actually, to use an obviously extreme example that doesn’t universally apply, that’s more-or-less the theory behind the various Special Forces selection procedures —
As opposed to someone artificially creating a conflict to see how the other party navigates it — which I’m not at all a fan of — I think exercises in shared trust have both predictive value for future behavior and build good team cohesion when overcome.
I’d be interested to hear various participants’ and observers’ takes on the actual impact of this event
Me too, but I’d ideally want the data captured semi-anonymously. Most people, especially effective people, won’t comment publicly “I think this is despicable and have incremented downwards various confidences in people as a result” whereas the “aww it’s ok, no big deal” position is much more easily vocalized.
(Personally, I’m trying to tone down that type of vocalization myself. It’s unproductive on an individual level — it makes people dislike you for minimal gain. But I speculate that the absence of that level of dialogue and expression of genuine sentiment potentially leads to evaporative cooling of people who believe in teamwork, mission, mutual trust, etc.)
Reasonable minds can differ on this and related points, of course. And I’m very aware my values diverge a bit from many here, again around stuff like seriousness/camaraderie/cohesion/intensity/harm-vs-care/self-expression/defection/etc.
Great comment. Insightful phrasing, examples, and takeaways. Thank you.
Two thoughts —
(1) Some sort of polling or surveying might be useful. In the Public Goods Game, researchers rigorously check whether participants understand the game and its consequences before including them in datasets. It’s quite possible that there are incredibly divergent understandings of Petrov Day among the user population. Some sort of surveying would be useful to understand that, as well as things like people’s sentiments towards unilateralist action, trust, etc., no? It’d be self-reported data, but it’d be better than nothing.
(2) I wonder how Petrov Day setup and engagement would change if the site went down for a month as a consequence.
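For context on the comprehension checks in (1): they typically test whether participants can compute the payoff structure of the linear Public Goods Game. A minimal sketch, where the endowment and multiplier values are hypothetical (real studies vary them):

```python
# Minimal sketch of the standard linear Public Goods Game payoff.
# Endowment and multiplier are hypothetical example values.

def pgg_payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation pays the group, but free-riding pays the individual:
print(pgg_payoffs([20, 20, 20, 20]))  # everyone contributes everything
print(pgg_payoffs([0, 20, 20, 20]))   # one defector rides on the others
```

Checking that a participant can reproduce numbers like these is what lets researchers interpret their later contributions as deliberate cooperation or defection, which is roughly the kind of shared-understanding baseline the survey idea is after.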
Interesting thought yeah.
My first guess is there’s some overlap but it’s slightly orthogonal — btw, it might not have come across in the original post, but Butler is a really well-loved teammate who is happy to defer to other guys on his team, set them up for success, etc. He doesn’t need to be “the guy” any given night — he just wants his team to win, with a rather extreme fervor about it.
I honestly don’t get it—do you have a link to the previous discussion that justified why anyone’s taking it all that seriously?
Here was my analysis last year —
In fairness, my values diverge pretty substantially from a lot of the community here, particularly around “life is serious” vs “life isn’t very serious” and the value of abstract bonds/ties/loyalties/camaraderie.
You’re being very kind in far-mode consequentialism here, but come on now.
Making your friend look foolish in front of thousands of people is bad etiquette in most social circles.
Why would there be?
Different social norms, I suppose.
I’m trying to think if we ever prank each other or socially engineer each other in my social circle, and the answer is yes but it’s always by doing something really cool — like, an ambiguous package shows up but there’s a thoughtful gift inside.
(Not necessarily expensive — a friend found a textbook on Soviet accounting for me, I got him a hardcover copy of Junichi Saga’s Memories of Silk and Straw. Getting each other nice tea, coffee, soap, sometimes putting it in a funny box so it doesn’t look like what it is. Stuff like that. Sometimes nicer stuff, but it’s not about the money.)
Then I’m trying to think how my circle in general would respond to no-permission-given, out-of-scope pranking of someone’s real-life community that they’re a member of — and yeah, there’d be pretty severe consequences in my social circle if someone did that. If I heard that someone who was currently a friend or acquaintance did what your buddy did, they’d be marked as incredibly discourteous and much less trustworthy. It would just get marked as… pointless, rude, destructive behavior.
And my circle is pretty tech-heavy btw, and we do joke around a lot — it’s just that when we do pranks, there’s almost always a gift or something uplifting at the end.
I don’t mean this to be blunt btw, I just re-read it before posting and it reads more blunt than I meant it to — I was just running through whether this would happen in my social circle, I ran it out mentally, and this is what I came up with.
Obviously, everyone’s different. And that’s of course one of the reasons it’s hard for people to get along. Some sort of meta-lesson, I suppose.
Umm. Grudgingly upvoted.
(For real though, respect for taking the time to write an after-action report of your thinking.)
I was tricked by one of my friends:
Serious question—will there be any consequences for your friendship, you think?
It’d take a few paragraphs to tell the whole story if you don’t already follow basketball, but this —
Long story really short, the 76ers have a player who is an incredible athlete but doesn’t feel comfortable taking jump shots far away from the basketball hoop.
Thus, defenses can ignore him when he’s out on the perimeter.
His coach told him publicly to take one 3-point shot per game. Coach said he doesn’t even care if he hits it or not.
The player basically refused to do it.
It’s more detailed than that, but the 80/20 is that a young, incredible athlete with immense potential refused to follow his coach’s (incredibly reasonable) instruction.
In most sports and at most levels of play in sports, that’d get you benched by the coach.
But in the NBA, when a coach and star player feud, the coach gets fired around 9 times out of 10. (The other time, the star player gets traded. But the coach usually gets fired first in the NBA.)
So, I think it’s important that LessWrong admins do not get to unilaterally decide that You Are Now Playing a Game With Your Reputation.
Dude, we’re all always playing games with our reputations. That’s, like, what reputation is.
And good for Habryka for saying he feels disappointment at the lack of thoughtfulness and reflection; it’s very much not just permitted but almost mandated by the founder of this place —
Here’s the relevant citation from Well-Kept Gardens:
I confess, for a while I didn’t even understand why communities had such trouble defending themselves—I thought it was pure naivete. It didn’t occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.
I have seen rationalist communities die because they trusted their moderators too little.
Let’s give Habryka a little more respect, eh? Disappointment is a perfectly valid thing to be experiencing, and he’s certainly conveying it quite mildly and graciously. Admins here did a hell of a job resurrecting this place from the dead; to express very mild disapproval at a lack of thoughtfulness during a community event is… well, that seems very much on-mission, at least according to Yudkowsky.
Y’know, there was a post I thought about writing up, then decided not to bother — but I saw your comment here, H, and the “high level of disappointment reading this response”… and so I wrote it up.
Here you go:
That’s an extreme-ish example, but I think the general principle holds to some extent in many places.
Yeah, I have first-pass intuitions but I genuinely don’t know.
In an era with both more trustworthy scholarship (post-replication-crisis, etc.) and less polarization, I think this would actually be an amazing topic for a variety of longitudinal studies.
Alas, probably not possible right now.
Respectfully — and I do mean this respectfully — I think you’re talking completely past Jacob and missed his point.
Your comment starts:
How much your life is determined by your actions, and how much by forces beyond your control, that is an empirical question. You seem to believe it’s mostly your actions.
But Jacob didn’t say that.
You’re inferring something he didn’t say — actually, you’re inferring something that he explicitly disclaimed against.
Here’s the opening of his piece right after the preface; it’s more-or-less his thesis:
What’s bad about victim mentality? Most obviously, inhabiting a narrative where the world has committed a great injustice against which you are helpless is extremely distressing. Whether the narrative is justified or not, it causes suffering.
You made some other interesting points, but I don’t think he was trying to ascribe macro-causality to internal or external factors.
He was saying, simply, in 2020-USA he thinks you’ll get both (1) better practical outcomes and (2) better wellbeing if you eschew what he calls victim mentality.
He says it doesn’t apply universally (eg, Ancient Sparta).
And he might be right or he might be mistaken.
But that’s broadly what his point was.
You’re inferring something that isn’t what he said — something he actually pretty much said he didn’t believe — and then you went from there.
Going through these now. I started with #3. It’s astoundingly interesting. Thank you.
Hmm. I’m having a hard time writing this clearly, but I wonder if you could get interesting results by:
(1) Training on a wide range of notably excellent papers from “narrow-scoped” domains,

(2) Training on a wide range of papers that explore “we found this worked in X field, and we’re now seeing if it also works in Y field” syntheses,

(3) Then giving GPT-N prompts to synthesize narrow-scoped domains in which that hasn’t been done yet.
You’d get some nonsense, I imagine, but it would probably at least spit out plausible hypotheses for actual testing, eh?
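Step (3) could even be mechanized before the model gets involved: enumerate the domain pairs not yet covered by an existing synthesis and turn each into a prompt. A minimal sketch, where the domain names, the `tried` set, and the prompt template are all hypothetical placeholders (the actual GPT-N call is out of scope):

```python
# Hypothetical sketch: build cross-domain synthesis prompts for pairs
# that haven't been tried yet. Domains and the `tried` set are made up.
from itertools import combinations

def untried_prompts(domains, tried):
    """One prompt per unordered domain pair with no known synthesis."""
    prompts = []
    for x, y in combinations(sorted(domains), 2):
        if (x, y) not in tried and (y, x) not in tried:
            prompts.append(
                f"We found this worked in {x}; here we test whether it also works in {y}."
            )
    return prompts

domains = ["spectroscopy", "auction theory", "epidemiology", "queueing theory"]
tried = {("spectroscopy", "epidemiology")}  # syntheses already in the literature

for p in untried_prompts(domains, tried):
    print(p)
```

The model would then be asked to flesh out each prompt; filtering the resulting nonsense down to testable hypotheses is the human’s job.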
By the way, wanted to say this caught my attention and I did this successfully recently on this question —
Combined probabilities were over 110%, so I went “No” on all candidates. Even with PredictIt’s 10% fee on winning, I was guaranteed to make a tiny bit on any outcome. If a candidate not on the list was chosen, I’d have made more.
My market investment came out to ($0.43) — that’s negative 43 cents; ie, no capital required to stay in it — on 65 no shares across the major candidates. (I’d have done more, but I don’t understand how the PredictIt $850 limit works yet and I didn’t want to wind up not being able to take all positions.)
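For anyone wanting to check the logic, here is a minimal sketch of the payoff math. The candidate prices are hypothetical, and the fee is modeled as 10% of the profit on each winning No share, which is my reading of how PredictIt charges, not an official formula:

```python
# Hedged sketch of the "No on every candidate" arbitrage described above.
# Candidate prices are hypothetical; fee modeled as 10% of profit on
# each winning No share (an assumption about PredictIt's fee schedule).

def no_arbitrage(yes_prices, fee=0.10):
    """Net result of buying one No share on each candidate, for each
    possible winner. Exactly one Yes resolves true, so exactly one No loses."""
    no_prices = [1 - p for p in yes_prices]
    cost = sum(no_prices)
    results = []
    for winner in range(len(yes_prices)):
        payout = 0.0
        for i, q in enumerate(no_prices):
            if i != winner:
                # A winning No share pays $1; the fee applies to its profit (1 - q).
                payout += 1 - fee * (1 - q)
        results.append(round(payout - cost, 4))
    return results

# Hypothetical Yes prices summing to 1.12: every outcome nets a small profit.
print(no_arbitrage([0.35, 0.30, 0.22, 0.15, 0.10]))
```

One wrinkle worth noting: because the fee is taken out of winnings, the guaranteed margin shrinks as the cheapest Yes contract gets cheaper, so a thin over-100% total can actually flip negative after fees. Worth checking each outcome, as above, before taking the positions.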
I need to figure out how the $850 limit works in practice soon — is it 850 shares, $850 at risk, $850 max payout, or something else? Kinda unclear from their documentation; will do some research.
But yeah, it was fun and it works. Thanks for pointing this out.