I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and—it seems to me like there’s something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my “not getting along” summary trivializing, which may even be true, as noted I haven’t read all the words—just, still, is this really the best thing for everyone involved to be doing with their time?
tcheasdfjkl
[Hammertime Final Exam] Accommodate Yourself; Kindness Is An Epistemic Virtue; Privileging the Future
This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.
I think there’s actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of “the word ‘cult’ bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often than population baseline but not actually that often) veer into the legitimately concerning stuff and do not really do the actually horrifying stuff”. This post, as I read it, is making the case that Leverage veered far more strongly into the “legitimately concerning” region of cult-adjacent space, and perhaps made contact with “actually horrifying”-space.
Notably, some of your examples are actually bad imo? “Hero-worship of Eliezer” is imo bad, and also happily is not really much of a thing in at least the parts of ratspace I hang out in; “the tendency of rationalists to hang out with only other rationalists” is I think also not great, and I think if taken to an extreme would be a pretty worrying sign, but in fact most rationalists I know do maintain social ties (including close ones) outside this group.
Unusual rationalist psychological techniques span a pretty wide range, and I have sometimes heard descriptions of such techniques/practices/dynamics and been wary or alarmed, and talked to other rationalists who had similar reactions (which I say not to invoke the authority of an invisible crowd that agrees with me but to note that rationalists do sometimes have negative “immune” responses to practices invented by other rationalists even if they’re not associated with a specific disliked subgroup). Sort of similarly re: “take over the world plan”, I do not really know enough about any specific person or group’s AI-related aspirations to say how fair a summary that is, but… I think the fairer a summary it is, the more potentially worrying that is?
Which is to say, I do think that there are pretty neutral aspects of rationalist community (the group houses, the weird ingroup jargon, the enthusiasm for making everything a ritual) that may trip people’s “this makes me think of cults” flag but are not actually worrying, but I don’t think this means that rationalists should turn off their, uh, cult-detectors? Central-examples-of-cults do actually cause harm, and we do actually want to avoid those failure modes.
Yeah, I think the reason sexual abuse is wrong is because it has an unacceptably high risk of traumatizing someone, not because it always in all cases does. (Sort of like drunk driving.)
Since the day is drawing to a close and at this point I won’t get to do the thing I wanted to do, here are some scattered thoughts about this thing.
First, my plan upon obtaining the code was to immediately repeat Jeff’s offer. I was curious how many times we could iterate this; I had in fact found another person who was potentially interested in being another link in this chain (and who was also more interested in repeating the offer than nuking the site). I told Jeff this privately but didn’t want to post it publicly (reasons: thought it would be more fun if this was a surprise; didn’t think people should put that much weight on my claimed intentions anyway; thought it was valuable for the conversation to proceed as though nuking were the likely outcome).
(In the event that nobody took me up on the offer, I still wasn’t going to nuke the site.)
Other various thoughts:
Having talked to some people who take this exercise very seriously indeed and some who don’t understand why anyone takes it seriously at all, I find that both perspectives make a lot of sense to me and yet I’m having trouble explaining either one to the other. Probably I should practice passing some ITTs.
Of the arguments raised against the trade, the one I am most sympathetic to is TurnTrout’s argument that it’s actually very important to hold to one’s principles even when there’s a naive utilitarian argument in favor of abandoning them. I agree very strongly with this idea.
But it also seems to me there’s a kind of… mixing levels here? The tradeoff here is between something symbolic and something very real. I think there’s a limit to the extent this is analogous to, like, “maintain a bright line against torture even when torture seems like the least bad choice”, which I think of as the canonical example of this idea.
(I realize some people made arguments that this symbolic thing is actually reflective or possibly determinative of probabilistic real consequences (in which case the “mixing levels” point above is wrong). (Possibly even the arguments that didn’t state this explicitly relied on the implication of this?) I guess I just… don’t find that very persuasive, because, again, the extent to which this exercise is analogous to anything of real-world importance is pretty limited; the vast majority of people who would nuke LW for shits and giggles wouldn’t also nuke the world for shits and giggles. Rituals and intentional exercises like these do have some power, but I think I put less stock in them than some.)
Relatedly, I guess I feel like if the LW devs wanted me to take this more seriously they should’ve made it have actual stakes; having just the front page go down for just 24 hours is just not actually destroying something of real value. (I don’t mean to insult the devs or even the button project—I think this has been pretty great actually—it’s just great in more of a “this is a fun stunt/valuable discussion starter” way than a “oh shit this is a situation where trustworthiness and reliability matter” way. (I realize that doing this in a way that had stakes would have possibly been unacceptably risky; I don’t really know how to calibrate the stakes such that they both matter and are an acceptable risk.))
Nevertheless I am actually pleased that we’ve made it through (most of) the day without the site going down (even when someone posted (what they claim is) their code on Facebook).
I am more pleased than that about the discussions that have happened here. I think the discussions would have been less active and less good without a specific actual possible deal on the table, so I’m glad to have spurred a concrete proposal which I think helped pin down some discussion points that would have remained nebulous or just gone unsaid otherwise.
If in fact the probability of someone nuking the site is entangled with the probability of someone nuking the world (or similar), I think it’s much more likely that both share common causes than that one causes the other. If this is so, then gaining more information about where we stand is valuable even if it involves someone nuking the site (perhaps especially then?).
In general I think a more eventful Petrov Day is probably more valuable and informative than a less eventful one.
It will be tragic if REACH closes. Thanks for letting us know of the urgency of the need. Just doubled my pledge, though it’s still fairly small compared to the need.
...actually, let me increase it further, I think I was undervaluing how much I care about REACH existing.
Having finally read this—here are things I agree with:
there is such a thing as too much attention paid to small harms, and this can trap people in increasingly convoluted rules aimed at preventing harms of a magnitude smaller than the harm caused by the convoluted rules themselves; arguably it’s not even possible to prevent harms that small, and trying to do so is more harmful than just being okay with the notion that sometimes you may inevitably slightly harm someone you care about
this dynamic can break “we”-ness by creating a “fault” rather than “fault analysis” mindset—but not necessarily just by creating an adversarial dynamic—it is just as bad, possibly worse, if the individuals place the fault on themselves. I’ve been in this situation where I want to be able to say “I find this thing that happened slightly unpleasant, and I want to tell you about that because I want you to know about my experiences, and maybe we can think about whether this is easily preventable in the future, but if it isn’t that’s really okay” and this gets taken as “I’m sorry for hurting you I will do better” which is not at all what I was trying to go for and which makes things worse for me
implicit disagreement about burden of proof breaks discourse in predictable ways, and it is better to make this disagreement explicit
different worlds
some of the pro-sensitivity people in the vignettes are unreasonable (most notably Alexis, Elliott, maybe Harley). in Alexis’s case I even agree that the primary reason they are being unreasonable is that the possible harm is too small to matter
it is often a good change for some individuals to try and go about their lives paying less attention to whether they might accidentally make someone uncomfortable. it is a change I in particular have been trying to make, for one.
Disagreements:
The reason Elliott is being unreasonable is not primarily that the effect of Finley taking off their shirt is too small to matter (though that’s part of it); it’s primarily that, what the hell, Finley’s body belongs to Finley and not to bystanders, and Finley gets to choose what to do with it.
More broadly—this essay suffers from not having a concept of sovereignty, of what is YOURS. Your body is yours, so you should be able to choose whether you wear a shirt or not. -- BUT ALSO, your body is yours, so people shouldn’t punch it unless they have good reason to believe you want them to. -- I just don’t think these situations are really parallel, because to me the sovereignty question is huge. (and I say this as a more-or-less-utilitarian, even; I just think that in general people are happier with more sovereignty over their bodies.) -- Having to make sure you dress in a way that never upsets anybody is a huge burden. Having to make sure you never say something that accidentally makes someone uncomfortable is a huge burden too (though people should take this on to some extent, without going overboard). Having to just not punch people is not actually a huge burden.
The autistic meltdown is not micro, what the hell. You mention that it is within the bounds of culturally accepted behavior, but… I don’t see how that’s particularly relevant to your point? You focus on “Kelly’s objection is specifically to the disregard of autonomy, so they presumably would have objected even to a small disregard of autonomy, so this was a small objection”, but in fact I do not think this was a small disregard of autonomy at all! You don’t have to believe that five-year-olds should be able to make all choices autonomously to think that it’s not a good idea to drag them into a thing they are incredibly distressed about! And I do think that incredibly distressing experience + being forced into a thing one doesn’t want with no apparent weight put on one’s preferences is a particularly bad combination.
I don’t think I agree with you about the pendulum model being the way things usually go. Or rather, I think that with any new change for the better, some people will overapply it, but I don’t think that’s necessarily the dominant dynamic.
You’re right that the people who most needed any given change are maybe not the best equipped to see when the change has gone too far or hurt someone—but neither are the people who didn’t need the change well equipped to see whether the change has gone far enough to meet its goal. I don’t really know if there are any individuals well positioned to see both—maybe people who are at the intersection of different competing needs, such that they personally have to be sensitive to the tradeoffs involved? But generally I think this needs to be a collaborative effort with input from people with different kinds of experiences—which I think is what you say too, I just think you’re wrong to say that people most helped by the initial change are uniquely unsuited to see when it’s gone far enough.
anyway I guess the gist is that I agree with you that there exists a threshold of magnitude of harm such that harms below that magnitude should be mostly disregarded because trying to account for them creates more harm than it prevents; however, I think I disagree with you pretty strongly about where the threshold should be, and also about what kinds of actions/behavior are and aren’t reasonable to expect in the service of preventing harm.
[Link] Bay Area Winter Solstice 2023
I’m sort of surprised other people are surprised that bioethics is not uniformly trash. (This includes people on Facebook and elsewhere where this has come up.)
I know that bioethics has a terrible reputation around these parts and also know there do in fact exist lots of terrible bioethics takes (e.g. I want to personally fight the author of paper #31), but even though I had not previously actually looked at a sample of bioethics papers, I somewhat strongly suspected that rationalists who railed against bioethics were overgeneralizing.* It’s not impossible for an academic field to have epistemic standards and Overton windows bad enough for that generalization to be accurate, and obviously the bioethics Overton window is different from the rationalist Overton window (and I mostly prefer the latter), but “these terrible takes are within the bioethics Overton window” is not very strong evidence for “these terrible takes are representative of bioethics as a whole”, and I would have been moderately surprised if it had turned out that all or even most of the takes were that flavor of terrible.
(Unfortunately I did not register this prior anywhere; I mostly did not try to argue with people about it because I had not actually looked at enough bioethics to be well informed about it or have strong arguments to make. I realize it’s kind of bad form for me to be like “I predicted this!!” when I did not say that anywhere, sorry. I don’t really want people to update on my correctness from this, anyway, my point is mostly that I think local discourse on this topic has been too unnuanced.)
*For that matter, sometimes people saying such things even agree when pressed that they’re overgeneralizing; there’s a sort of motte-and-bailey that I’ve seen (with both this and other examples) that’s like “bioethicists suck” “not all bioethicists” “well of course I don’t mean ALL, I mean too many”. But apparently a community in which people generalize about bioethicists in this way is also a community in which people are surprised when a sample of bioethics papers is not uniformly trash?
(I guess that part might be kind of unfair of me since possibly the people who agreed they were overgeneralizing would have expected something like 80% of papers to be very terrible, in which case it’s both true that they’re overgeneralizing and that this actual sample is a notable update.)
Bay Winter Solstice 2023: Song & speech auditions
I want to express some strong appreciation for the post including not just some indicators that frame control is occurring but also some indicators that frame control is NOT occurring, and also for trying to mitigate the likelihood that this concept will be misused in the future. I also appreciate that the comment section is full of people absorbing the concept and also working to set bounds on it and make it safer. I appreciate the epistemic environment that gives rise to this kind of caution.
I’m now tempted to run such a survey of my own...
This post is a weird experience. It makes mostly reasonable claims but it’s aggressively objectifying-male-gaze-y in a really unpleasant way, and I strongly feel that content with that property should not be on LW without, at minimum, content warnings to that effect (which in this case would, uh, need to come before the title somehow), and preferably not at all.
(Trying to say more about that intuition:
it feels like it assumes the audience will be male (and having LW contain posts that are assumed-male-audience feels quite Unwelcoming (this word is overused but I think this is centrally what it’s for))
it feels like it sticks the reader straight into an objectifying frame without warning; I think warning/consent to engage with this frame is somewhat necessary here)
I think this problem might be largely due to automatic crossposting? I think it’s, like, okay for blogs with these properties to exist in the world (though I don’t want to read them) and I expect the blog itself provides enough “content warning” through context. But pulling the post onto LW by default seems bad.
I think ozy has written posts I’ve liked where they said a lot of similar stuff in a less intensely annoying way.
One fairly central reaction I had to this post is not so much about the specific phenomenon of frame control but rather about the general observation that it’s quite common for the aspects of an abusive situation that are worst to experience to NOT be the same as the aspects that are most clear-cut bad and easiest to convey objectively to another person.
This seems true; I have heard multiple people with objectively horrifying stories of abuse report that actually they don’t really care about the objectively awful parts that their friends are horrified about, but instead they are really fucked up by some stuff that’s much harder to convey. (Probably in some cases that’s the same general phenomenon described in this post and in other cases it’s some other interpersonal fuckery.)
I have also heard people report that they experienced a situation as abusive and NOT have any clear-cut objectively awful behavior to point to. It makes perfect sense that this would happen in some cases—because the abuser is savvy enough about what people will object to that they avoid those things, or because the abuser is actually trying to be good by following the ethical rules they know but is not managing to also be good in less legible matters, or for some other reason.
...It is also my experience that when humans make not-fully-objective reports about the beliefs/behaviors/words of other humans they disagree with and/or have some kind of adversarial relationship with, it is extremely common for such subjective accounts to be distorted in some way. For this reason, when I hear about an accusation of wrongdoing, I usually try to zero in on the objective claims being made, because (assuming I basically trust that the reporter is intending to be truthful) those are much less likely to be distorted or interpreted through a lens I think is unreasonable.
But this means that it’s very hard for me to tell, as an outsider, when illegible wrongdoing has occurred. (I was going to say “illegible harm” but actually accusations of interpersonal wrongdoing are much stronger evidence of harm than of wrongdoing per se; I only need a very basic level of trust in someone’s honesty to conclude they were harmed by a situation they’re describing as abusive.) Indeed this feels kind of epistemically hopeless to ever evaluate from the outside?
I don’t really know what to do with this thought but it felt important to note.
What does “somatically aware” mean here?
Strong +1 to this—the pandemic sharply increased both some of the costs and some of the benefits of group housing.
My dream is for REACH to play this role. It already sorta does to some extent for people who have some money, but I want it to eventually have enough funding to afford to routinely offer this for free for people who need it. (Of course, funding is hard.)
I just did this exercise and came up with something like 150+ bugs—interestingly, none of them at difficulty level 1, I think because if a bug is actually easy to fix and something I’m able to notice then I’ve already fixed it by now. I also put priorities on the bugs, since it felt wrong to sort them only by difficulty when some are vital to fix and others I don’t really care about very much.
I really like easy bug fixes but can’t think of any that have been particularly strange per se. A couple maybe noteworthy ones:
I noticed that because I procrastinate mightily on getting out of bed in the morning, I sometimes stay in bed so long I get very hungry, which makes me have less energy, which makes me procrastinate even more, and so forth in a vicious cycle. I started keeping Clif bars in my bedside drawer to have a way out of this cycle. Now I also keep Clif bars in my various backpacks and purses so that if I get hungry while out and about I don’t have to spend effort thinking about where to get food.
I typically fidget a lot in somewhat destructive ways like picking at the skin on my fingers. Getting a fidget spinner has helped me fidget in less destructive ways.
I’ve made doing boring tasks more endurable by listening to podcasts while I do them.
Sometimes I need to do a thing which feels too long and complicated to do in my current state (e.g. get ready for bed when my bed is a mess and I’m exhausted). I’ve found it’s really helpful to break the task down into chunks and tell somebody about each chunk as I do it (e.g. “ok now I’m going to take my meds, refill my waterbottle, and clear my bed”, “ok now I’m going to brush my teeth, use the bathroom, and take out my contacts”, etc.)
This is the third time in a few years (second time this year) that I’ve seen a beloved community institution try its futile best to raise enough funds to stay afloat, reach a point where it looks like it will have to close immediately, and suddenly successfully raise the money it had been trying and failing to get.
This makes sense—as a donor, I’m willing to pay substantially more if my contribution is likely to make a difference between a thing I care about existing and not. But this clearly creates a lot of stress for the organizers—if you won’t get the support you need until you’re almost out of business, you have to get right up to the edge before you can succeed, even though the resources you needed were there all along.
I wonder if this is something that can be improved on somehow.