I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like. You would think otherwise given Greg’s forceful, charismatic presence /j
[1] I don’t particularly blame them; consider the alternative.
I think the alternative is actually much better than silence!
For example, I think the EA Hotel is great and that many “in the know” think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information for the EA Hotel and for the community. I wish that more would be surfaced.
Simply put, if you are actually trying to make a good org, being silently blackballed by those “in the know” is not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those “in the know” matter; they lead, and I think it’s better for everyone if that leadership happens in the light.
Another thing to do, of course, would be to just do some amount of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have.
I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.
“If you apply to this grant, and get turned down, we’ll write about why we don’t like it publicly for everyone to see.”
I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.
Thank you SO MUCH for writing this.
The case Zoe recounts of someone “having a psychotic break” sounds tame relative to what I’m familiar with. Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
I think this is so well put and important.
I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itself to be world-saving. Any such org is going to need to proactively combat this fear if it wants people to speak out. To me this is totally obvious.
Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization. Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR. (This diffusion of responsibility, of course, doesn’t help when there are actual crises, mental health or otherwise.)
I feel that this is a very important point.
I want to hear more experiences like yours. That’s not “I want to hear them [before I draw conclusions].” I just want to hear them. I think this stuff should be known.
I think most of LW believes we should not risk ostracizing a group (with respect to the rest of the world) that might save the world, by publicizing a few broken eggs. If that’s the case, much discussion is completely moot. I personally kinda think that the world’s best shot is the one where MIRI/CFAR type orgs don’t break so many eggs. And I think transparency is the only realistic mechanism for course correction.
I think that smart people can hack LW norms and propagandize / point-score / accumulate power with relative ease. I think this post is pretty much an example of that:
- a lot of time is spent gesturing and sermonizing about the importance of fighting biases etc., with no particularly informative or novel content (it is, after all, intended to “remind people of why they care”). I personally find it difficult to engage critically with this kind of high-volume, low-density writing.
- ultimately the intent seems to be an effort to coordinate power against types of posters that Duncan doesn’t like

I just don’t see how most of this post is supposed to help me be more rational. The droning on makes it harder to engage as an adversary than if the post were just “here are my terrible ideas”, but it does so in an arational way.
I bring this up in part because Duncan seems to be advocating that his adherence to LW norms means he can’t just propagandize etc.
If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.
I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you’re going to dismiss all of the “I” statements as being mere window dressing or something (I’m not sure that’s what you’re doing, but it seems like something like that is necessary, to pretend that they weren’t omnipresent in what I wrote), you need to do so explicitly. You need to argue for them not-mattering; you can’t just jump straight to ignoring them, and pretending that I was propagandizing.
If people here really think you can’t propagandize or bad-faith accumulate points/power while adhering to LW norms, well, I think that’s bad for rationality.
I am sure that Duncan will be dissatisfied with this response because it does not engage directly with his models or engage very thoroughly by providing examples from the text, etc. I’m not doing that stuff, because I just don’t actually think it serves rationality to do so.
While I’m at it:
Duncan:
I’m not trying to cause appeals-to-emotion to disappear. I’m not trying to cause strong feelings oriented on one’s values to be outlawed. I’m trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.
To me it seems really obvious that if I said to Duncan in response to something, “you are just sacrificing long-term goals for the sake of short-term point-scoring”, then (if he chose to respond) he would write about how I am making a bald assertion and blah blah blah, how I should retract it and instead say “it feels to me like you are [...]” and blah blah blah. But look: in this quote there is a very clear, uncited, and unevidenced claim that people are sacrificing their long-term goals for the sake of short-term point-scoring. I am not saying it’s bad to make such assertions, just that Duncan can and does make such assertions baldly while adhering to norms.
To zoom out, I feel in the OP and in this thread Duncan is enforcing norms that he is good at leveraging but that don’t actually protect rationality. But these norms seem to have buy in. Pooey!
I continuously add more to this stupid post in part because I feel the norms here require that a lot of ink get spilled and that I substantiate everything I say. It’s not enough to just say “you know, it seems like you are doing [x thing I find obvious]”. Duncan is really good at enforcing this norm and adhering to it.
But the fact is that this post was a stupid use of my time that I don’t actually value having written, completely independent of how right I am about anything I am saying or how persuasive it is.

Again I submit:
I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you’re going to dismiss all of the “I” statements as being mere window dressing or something (I’m not sure that’s what you’re doing, but it seems like something like that is necessary, to pretend that they weren’t omnipresent in what I wrote), you need to do so explicitly. You need to argue for them not-mattering; you can’t just jump straight to ignoring them, and pretending that I was propagandizing.
Look, if I have to reply to every single attack on a certain premise before I am allowed to use this premise, then I am not going to be allowed to use the premise ever. Because Duncan has more time allocated to this stuff than I do, and seemingly more than most people who criticize this OP. But that seems like a really stupid norm.
I made this top level because, even though I think the norm is stupid, among other norms I have pointed out, I also think that Duncan is right that all of them are in fact the norm here.
If you do happen to feel like listing a couple of underappreciated norms that you think do protect rationality, I would like that.
Brevity
Maybe it is good to clarify: I’m not really convinced that LW norms are particularly conducive to bad faith or psychopathic behavior. Maybe there are some patches to apply. But mostly I am concerned about naivety. LW norms aren’t enough to make truth win and bullies / predators lose. If people think they are, that alone is a problem independent of possible improvements.
since you might just have different solutions in mind for the same problem.
I think that Duncan is concerned about prejudicial mobs being too effective, and I am concerned about information about abuse being systematically prevented from surfacing. To some extent I do just see this as a conflict of interests—Duncan is concerned about the threat of being mobbed and advocates tradeoffs accordingly; I’m concerned about being abused / my friends being abused and advocate tradeoffs accordingly. But to me it doesn’t seem like LW is particularly afflicted by prejudicial mobs, while it is nonzero-afflicted by abuse.
I don’t think Duncan acknowledges the presence of tradeoffs here, but IMO there absolutely have to be tradeoffs. To me, the generally upvoted and accepted responses to jessicata’s post are making a tradeoff: protecting MIRI against mudslinging, disinformation, and mobbing, while also making it scarier to try to speak up about abuse. Maybe the right tradeoff is being made and we have to really come down on jessicata for being too vague, equivocating too much, or being a fake victim of some kind. But I also think we should not take advocacy regarding these tradeoffs at face value, which, yeah, LW norms seem to really encourage.
Your OP is way too long (or not sufficiently indexed) for me to, without considerable strain, determine how much or how meaningfully I think this claim is true. Relatedly I don’t know what you are referring to here.
Maybe there is some norm everyone agrees with that you should not have to distance yourself from your friends if they turn out to be abusers, or be open about the fact that you were their friend, or something. Maybe people are worried about the chilling effects of that.
If such a norm is in effect, then imo it is better enforced explicitly.
But to put it really simply: it does seem like I should care about whether it is true that Duncan and Brent were close friends, if I am gonna be taking advice from him about how to interpret and discuss accusations made in the community. So if we are not enforcing a norm that such relationships should not enter discussion, then I am unclear about the basis for downvoting here.
I was not aware of any examples of anything anyone would refer to as prejudicial mobbing with consequences. I’d be curious to hear about your prejudicial mobbing experience.
Great, thanks.
I found this post persuasive, and only noticed after the fact that I wasn’t clear on exactly what it had persuaded me of.
I want to affirm that this seems to me like something that should alarm you. To me, a big part of rationality is about being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools that produce it.
Related post: https://www.lesswrong.com/posts/ybQdaN3RGvC685DZX/the-emh-is-false-specific-strong-evidence
One relevant thing here is the baseline P(beats market) given [rat / smart] & [tries to beat market]. In my own anecdotal dataset of about 15 people, the probability here is about 100%, and the amount of wealth among these people is also really high. Obvious selection effects or whatever are obvious. But EMH is just a heuristic, and you probably have access to stronger evidence.
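To make the “obvious selection effects” point concrete, here is a minimal Bayes-factor sketch; the per-person success rates (0.3 under “EMH holds, wins are luck”, 0.9 under “this reference class has real edge”) are illustrative assumptions, not data:

$$\frac{P(15 \text{ of } 15 \mid \text{edge})}{P(15 \text{ of } 15 \mid \text{luck})} = \left(\frac{0.9}{0.3}\right)^{15} = 3^{15} \approx 1.4 \times 10^{7}$$

A likelihood ratio that size would swamp almost any prior, which is exactly why the selection effect matters: if you mostly hear from the people who won, the effective ratio collapses back toward 1.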
I actually feel calmer after reading this, thanks. It’s nice to be frank.
For all the handwringing in comments about whether somebody might find this post demotivating, I wonder if there are any such people. It seems to me like reframing a task from something that is not in your control (saving the world) to something that is (dying with personal dignity) is the exact kind of reframing that people find much more motivating.
I’m interested in working on dying with dignity
Thanks a lot for doing this and posting about your experience. I definitely think that nonviolent resistance is a weirdly neglected approach; “mainstream” EA certainly seems against it. I am glad you are getting results, and not even that surprised.
You may be interested in discussion here, I made a similar post after meeting yet another AI capabilities researcher at FTX’s EA Fellowship (she was a guest, not a fellow): https://forum.effectivealtruism.org/posts/qjsWZJWcvj3ug5Xja/agrippa-s-shortform?commentId=SP7AQahEpy2PBr4XS
So the first step to good outreach is not treating AI capabilities researchers as the enemy. We need to view them as our future allies, and gently win them over to our side by the force of good arguments that meet them where they’re at, in a spirit of pedagogy and truth-seeking.
To this effect I have advocated that we should call it “Different Altruism” instead of “Effective Altruism”, because by leading with the idea that the movement does altruism better than the status quo, we are going to trigger and alienate people who are part of the status quo, people we could instead have won over by being friendly and gentle.
I often imagine a world where we had ended up with a less aggressive and impolite name attached to our arguments. I mean, think about how virality works: making every single AI researcher even slightly more resistant to engaging with your movement (by priming them to be defensive) is going to have a massive impact on the probability of ever reaching critical mass.
Open tolerance of the people involved with the status quo, and fear of alienating / making enemies of powerful groups, are a core part of current EA culture! Steve’s top comment on this post is an example of enforcing/reiterating this norm.
It’s an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow-blacklisted by CEA from the Covid documentary it funded for being too disrespectful in their speech re: how governments have handled Covid. That fits what I’d consider a taboo: something any socially savvy person would pick up on and internalize if they were around it.
Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might do if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out, though, that lc’s post analogizes their relationship with AI researchers to Israel’s relationship with Iran, and when I think of Israel’s resistance to Iran, nonviolence is not the first thing that comes to mind.
Re: taboos in EA, I think it would be good if somebody who downvoted this comment said why.
I’m a little late on this one, but another clear example is that theists don’t have the relationship with death that you would expect from someone who believed that post-death was the good part. “You want me to apologize to the bereaved family for murder? They should be thanking me!”