I like this one: https://www.theferrett.com/2015/06/22/i-never-said-no/
Also, a cool feature of his posts is that they all come with notes like this at the top:
(NOTE: Based on time elapsed since the posting of this entry, the BS-o-meter calculates this is 8.442% likely to be something that Ferrett now regrets.)
What’s the content of belief reporting?
This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.
I think there’s actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of “the word ‘cult’ bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often than population baseline but not actually that often) veer into the legitimately concerning stuff and do not really do the actually horrifying stuff”. This post, as I read it, is making the case that Leverage veered far more strongly into the “legitimately concerning” region of cult-adjacent space, and perhaps made contact with “actually horrifying”-space.
Notably, out of your examples, some are actually bad imo? “Hero-worship of Eliezer” is imo bad, and also happily is not really much of a thing in at least the parts of ratspace I hang out in; “the tendency of rationalists to hang out with only other rationalists” is I think also not great, and if taken to an extreme would be a pretty worrying sign, but in fact most rationalists I know do maintain social ties (including close ones) outside this group.
Unusual rationalist psychological techniques span a pretty wide range, and I have sometimes heard descriptions of such techniques/practices/dynamics and been wary or alarmed, and talked to other rationalists who had similar reactions (which I say not to invoke the authority of an invisible crowd that agrees with me but to note that rationalists do sometimes have negative “immune” responses to practices invented by other rationalists even if they’re not associated with a specific disliked subgroup). Sort of similarly re: “take over the world plan”, I do not really know enough about any specific person or group’s AI-related aspirations to say how fair a summary that is, but… I think the more fair a summary it is, the more potentially worrying that is?
Which is to say, I do think that there are pretty neutral aspects of rationalist community (the group houses, the weird ingroup jargon, the enthusiasm for making everything a ritual) that may trip people’s “this makes me think of cults” flag but are not actually worrying, but I don’t think this means that rationalists should turn off their, uh, cult-detectors? Central-examples-of-cults do actually cause harm, and we do actually want to avoid those failure modes.
Tentatively excited to read the rest of the sequence, though I think I would have gotten more out of this if I knew more about what your motivating examples of rationalists failing to coordinate are like. Would be interesting to hear about some examples if any are not too private/fraught to share.
I’m sort of surprised other people are surprised that bioethics is not uniformly trash. (This includes people on Facebook and elsewhere where this has come up.)
I know that bioethics has a terrible reputation around these parts and also know there do in fact exist lots of terrible bioethics takes (e.g. I want to personally fight the author of paper #31), but even though I had not previously actually looked at a sample of bioethics papers, I somewhat strongly suspected that rationalists who railed against bioethics were overgeneralizing.* It’s not impossible for an academic field to have epistemic standards and Overton windows bad enough for that generalization to be accurate, and obviously the bioethics Overton window is different from the rationalist Overton window (and I mostly prefer the latter), but “these terrible takes are within the bioethics Overton window” is not very strong evidence for “these terrible takes are representative of bioethics as a whole”, and I would have been moderately surprised if it had turned out that all or even most of the takes were that flavor of terrible.
(Unfortunately I did not register this prior anywhere; I mostly did not try to argue with people about it because I had not actually looked at enough bioethics to be well informed about it or have strong arguments to make. I realize it’s kind of bad form for me to be like “I predicted this!!” when I did not say that anywhere, sorry. I don’t really want people to update on my correctness from this, anyway, my point is mostly that I think local discourse on this topic has been too unnuanced.)
*For that matter, sometimes people saying such things even agree when pressed that they’re overgeneralizing; there’s a sort of motte-and-bailey that I’ve seen (with both this and other examples) that’s like “bioethicists suck” “not all bioethicists” “well of course I don’t mean ALL, I mean too many”. But apparently a community in which people generalize about bioethicists in this way is also a community in which people are surprised when a sample of bioethics papers is not uniformly trash?
(I guess that part might be kind of unfair of me since possibly the people who agreed they were overgeneralizing would have expected something like 80% of papers to be very terrible, in which case it’s both true that they’re overgeneralizing and that this actual sample is a notable update.)
Ooh you’re right that survey data would be cool. I’m kind of wishing someone had thought to make a recurring survey (monthly?) that asks people what precautions they’re taking now.
I think the level of lockdown described here was very common “around here” in spring 2020 but at least in my corner of the community I think it was pretty uncommon to stick to that level of lockdown all year.
In the summer we learned that outdoors is pretty safe, and outdoor masked hangouts became common.
When the microcovid site was launched, lots of people soon started using it to plan human contact that was important to them. (People were sometimes doing that before too, but with much more difficulty and often much more cautiously.)
My own lockdown has been quite cautious, though less severe than described here, but I still would not say my mental health is any good.
Strong +1 to this—the pandemic sharply increased both some of the costs and some of the benefits of group housing.
Also there are people who can’t isolate (essential workers and such); I wouldn’t want to increase their risk willy-nilly.
This is interesting and compelling but I wish it had more examples. Most notably, at “this happens in the real world ALL THE TIME” my reaction was—I don’t feel like I encounter blackmail happening all the time in my experience of the world? I’m not sure if this is because blackmail is concentrated in specific parts of the world I’m not in (e.g. among famous people), or because I’m in a bubble of relative niceness, or because there is blackmail happening near me but I’m not noticing it, or because Zvi is wrong about blackmail happening all the time, or because he’s classifying some things as blackmail that I wouldn’t, or because he’s classifying some things as blackmail that I’m not thinking of but I’d agree if they were pointed out to me. Some examples would help distinguish these things.
Sounds pragmatically weird in the case where the person isn’t known to already be donating.
Can we have a recap from the mods of how Petrov Day went? How many people pressed the button, how many people tried entering anything in the launch code field, how many people tried the fake launch code posted on Facebook in particular?
Since the day is drawing to a close and at this point I won’t get to do the thing I wanted to do, here are some scattered thoughts about this thing.
First, my plan upon obtaining the code was to immediately repeat Jeff’s offer. I was curious how many times we could iterate this; I had in fact found another person who was potentially interested in being another link in this chain (and who was also more interested in repeating the offer than nuking the site). I told Jeff this privately but didn’t want to post it publicly (reasons: thought it would be more fun if this was a surprise; didn’t think people should put that much weight on my claimed intentions anyway; thought it was valuable for the conversation to proceed as though nuking were the likely outcome).
(In the event that nobody took me up on the offer, I still wasn’t going to nuke the site.)
Other various thoughts:
I’ve talked to some people who take this exercise very seriously indeed and to some who don’t understand why anyone takes it seriously at all; both perspectives make a lot of sense to me, and yet I’m having trouble explaining either one to the other. Probably I should practice passing some ITTs.
Of the arguments raised against the trade, the one I am most sympathetic to is TurnTrout’s argument that it’s actually very important to hold to important principles even when there’s a naive utilitarian argument in favor of abandoning them. I agree very strongly with this idea.
But it also seems to me there’s a kind of… mixing levels here? The tradeoff here is between something symbolic and something very real. I think there’s a limit to the extent this is analogous to, like, “maintain a bright line against torture even when torture seems like the least bad choice”, which I think of as the canonical example of this idea.
(I realize some people made arguments that this symbolic thing is actually reflective or possibly determinative of probabilistic real consequences (in which case the “mixing levels” point above is wrong). (Possibly even the arguments that didn’t state this explicitly relied on the implication of this?) I guess I just… don’t find that very persuasive, because, again, the extent to which this exercise is analogous to anything of real-world importance is pretty limited; the vast majority of people who would nuke LW for shits and giggles wouldn’t also nuke the world for shits and giggles. Rituals and intentional exercises like these do have some power, but I think I put less stock in them than some.)
Relatedly, I guess I feel like if the LW devs wanted me to take this more seriously they should’ve made it have actual stakes; having just the front page go down for a mere 24 hours is not actually destroying something of real value. (I don’t mean to insult the devs or even the button project—I think this has been pretty great actually—it’s just great in more of a “this is a fun stunt/valuable discussion starter” way than a “oh shit this is a situation where trustworthiness and reliability matter” way. (I realize that doing this in a way that had stakes would have possibly been unacceptably risky; I don’t really know how to calibrate the stakes such that they both matter and are an acceptable risk.))
Nevertheless I am actually pleased that we’ve made it through (most of) the day without the site going down (even when someone posted (what they claim is) their code on Facebook).
I am more pleased than that about the discussions that have happened here. I think the discussions would have been less active and less good without a specific actual possible deal on the table, so I’m glad to have spurred a concrete proposal which I think helped pin down some discussion points that would have remained nebulous or just gone unsaid otherwise.
If in fact the probability of someone nuking the site is entangled with the probability of someone nuking the world (or similar), I think it’s much more likely that both share common causes than that one causes the other. If this is so, then gaining more information about where we stand is valuable even if it involves someone nuking the site (perhaps especially then?).
In general I think a more eventful Petrov Day is probably more valuable and informative than a less eventful one.
I’m pretty sure it is? I had already decided on & committed to a donation amount for 2019, and this would be in addition to that. The lifesaving part is relevant insofar as I am happier about the prospect of this trade than I would be about paying the same amount to an individual.
The only way I could imagine this not being perfectly counterfactual: discretionary spending choices depend somewhat on my finances at any given point, and large purchases have some impact on those finances, so if some other similar opportunity presented itself later on, my decision about that opportunity could have some indirect causal connection to my current decision (not in the direct sense of “oh I already donated last month so I won’t now”, but just in the sense of “hmm, how much discretionary-spending money do I currently have and, given that, do I want to spend $X on Y”). I’m not sure it’s ever really possible to get rid of that, though.
To be clear I am NOT looking for people to press the button, I am looking for people to give me launch codes.
I’ll note that giving someone the launch codes can only increase the chance of the homepage going down.
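(A toy model of that point, under my own simplifying assumption—not anything stated in the thread—that each code-holder independently decides to launch with some fixed probability p:)

```python
def p_down(p: float, n: int) -> float:
    """Probability the homepage goes down when each of n independent
    code-holders launches with probability p: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Handing out one more copy of the codes never lowers the risk:
assert all(p_down(0.1, n + 1) >= p_down(0.1, n) for n in range(10))
```

So under this (admittedly crude) independence assumption, each additional person holding the codes monotonically raises the chance of a launch.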
If someone else with codes wants to make this offer now that Jeff has withdrawn his, I’m now confident I am up for this.
this makes sense. I shall consider whether it makes sense for me to impulse-spend this amount of money on shenanigans (and lifesaving)
hey actually I’m potentially interested depending on what size of donation you would consider sufficient, can you give an estimate?