I’m running a small rationality dojo to try to approach this issue from the rat-for-rat-sake direction in a few weeks, trying to incorporate the things I learned from my Seasons of Growth, my Executive Function research, and stuff like Logan’s Naturalism sequence (not to mention years of teaching at rat camps and workshops). I plan to do a writeup after, but would also love to chat sometime about this, either before or after.
FWIW I think my main takeaway here is that if you update at all on any point of untrustworthiness of the original sources, that update should propagate toward the rest of the points.
I think most brains are bad at this, naturally, and it’s just a hard thing to do without effort, which is why things like Gish gallops and character assassinations work even when debunked.
My secondary takeaway is that people should not update as hard as they do on someone threatening to “retaliate” against social harm done to them, unless the claims are very obviously true or the “retaliation” is very obviously baseless. If we don’t know whether the claims are true, then what the accuser feels is “retribution” will be felt by the accused as “justice.” I think both are natural feelings most people would have, but most people have not been publicly pilloried, and so cannot as easily empathize with that position.
I also want to add that I think the community in general has mildly failed here, in treating the threat of legal action as evidence of wrongdoing even if the lawsuit would ultimately fail.
It is really bad to treat a libel suit threat as some horrible thing that no one “innocent” would ever do. It’s a form of demonizing anyone who has ever used or thought to use the legal system defensively.
Which, if intended, seems to fundamentally miss what the point of a legal system should be. It is no doubt a problem that people with lots of power, whether it’s fame or money or whatever, are more likely to win legal battles.
But it’s also a way more truth oriented process than the court of public opinion. And many people who would have stood 0 chance of getting justice without it have gotten some through it.
Do such threats have a chilling effect on criticism? Of course, and that’s a problem, particularly if they’re used too often or too quickly.
But the solution cannot be “no one makes such threats no matter what.” Because then there’s no recourse but the court of public opinion, which is not something anyone should feel comfortable ceding their life and wellbeing to.
I think someone outside the community, seeing people inside it shunned, demonized, etc. for threatening to use a very core right they’re entitled to, would likely find it… pretty sketchy.
Because it can easily be construed as “we resolve these things ‘in house,’ via our own methods. No need to get Outsiders involved.”
And man, it sure would be great if we had that sort of high trust effective investigation capability in the community.
But we really have not shown that capability yet, and even if we do, no one should feel like they’re giving up their basic rights to be a member of good standing in the community.
I think many if not most people in Emerson’s position, feeling like they were about to be lied about in a life-destroying way, had facts to rebut the lies, and were being essentially ignored in requests to clarify the truth, would think of legal action.
Whether they would be wrong in how easy it would be to win is a different issue entirely from that very (from base society perspective) normal view.
I definitely read all examples as “both at the same time.”
1) Whatever X publicly condemned thing you can think of, it exists on a spectrum.
2) There is a lot more of all instances of it happening than you think there are.
3) A lot of it does not look like the kind you are most likely to notice and condemn.
Thanks for this writeup, still undergoing various updates based on the info above and responses from Nonlinear.
One thing I do want to comment on is this:
(Personal aside: Regarding the texts from Kat Woods shown above — I have to say, if you want to be allies with me, you must not write texts like these. A lot of bad behavior can be learned from, fixed, and forgiven, but if you take actions to prevent me from being able to learn that the bad behavior is even going on, then I have to always be worried that something far worse is happening that I’m not aware of, and indeed I have been quite shocked to discover how bad people’s experiences were working for Nonlinear.)
I agree that it was a bad message to send. I agree that people shouldn’t make it hard for others who have a stake in something to learn about bad behavior from others involved.
But I think it’s actually a bit more complex if you consider the 0 privacy norms that might naturally follow from that, and I can kind of understand where Kat is (potentially) coming from in that message. This doesn’t really apply if Nonlinear was actually being abusive, of course, only if they did things that most people would consider reasonable but which felt unfair to the recipient.
What I mean is basically that it can be tough to know how to act around people who might start shit-talking your organization when them doing so would be defecting on a peace treaty at best, and abusing good-will at worst. And it’s actually generally hard to know if they’re cognizant of that, in my experience.
This is totally independent of who’s “right” or “wrong,” and I have 0 personal knowledge of the Nonlinear stuff. But there are some past summer camp attendees we could have put on blast for antisocial things they did that got them removed from the ecosystem. We try to be careful to only do that when it’s *really* egregious, and so often chose not to because it would have felt like too much of an escalation for something that was contained and private… but if they were to shit-talk the camps or how they were treated, that would feel pretty bad from my end, in the “Well, fuck, I guess this is what we get for being compassionate” sense.
Many people may imagine a better world in which everyone’s antisocial acts are immediately and widely publicized, but I think what would actually result is a default stance of “All organizations try to ruin the reputations of people they believe did something even slightly antisocial, so those people can’t harm the organization’s reputation by telling biased stories first,” and I think most people would find themselves unhappy with that world. (I’m not actually sure about that, though it seems safer to err on the side of caution.)
It can sound sinister, or be a bad power dynamic, coming from an organization toward an individual. But if an individual genuinely doesn’t seem to realize that the thing holding the org back isn’t primarily a mutual worry of reputational harm but something like compassion and general decency norms, it might feel necessary to make that explicit. Of course, making it explicit comes off as a threat, which is worse in many ways, even if the threat of reputational harm could have been implicitly understood just from the fact that the organization no longer wants you working with them.
There are good reasons historically why public bias is in the favor of individuals speaking out against organizations, but I think most people who have worked in organizations know what a headache it can be to deal with the occasional incredibly unreasonable person (again, not saying that’s the case here, just speaking in general), and how hard it is to determine how much to communicate to the outside world when you do encounter someone you think is worse than just a “bad fit.” I think it’s hard to set a policy for that which is fair to everyone, and am generally unsure about what the best thing to do in such cases is.
This was crossposted, so I can’t edit this version’s doc to say:
Please post submissions on the EA Forums version of this post!
Heya! Did you ever get the covers for Origin of Species finalized? Would be curious to see them if so :)
Agreed in principle, though it’s worth noting that more resourced people tend to have fewer insecurities in general. People who have a stable family, no economic insecurity, positive peer support, etc., end up less susceptible to cults, as well as to bad social dynamics in general.
This isn’t to say that people can’t create stable confidence for themselves without those things, only that “dependent confidence” is also a thing that people can have instead that acts protectively, or exposes risk.
Good breakdown of one of the aspects in all this. The insecurity/desperation topic is a really hard one to navigate well, but I agree it’s really important.
Hard because when someone feels like an outsider, a group of other likeminded outsiders will naturally want to help and welcome them, and it can be an uncomplicated good to do so. Important because if someone has only one source of support, resources, social connection, etc., they are far more likely to become desperate, or do desperate things, to maintain their place in the community.
Does this mean we should not accept anyone into the community just because they really want a safe place to avoid broader civilization? I don’t think so, but it’s definitely a flag more people should be aware of, including those who are desperate to belong. Exploitation can happen on a broad and public scale, like organizations looking for volunteers or employees, but it can also be small and private, at the level of group houses and friends made in the community.
Young people who join the community are of course especially at risk here, and it’s a constant struggle at the rationality camps to welcome and provide opportunities for those who do want to join the broader community, rather than just enjoy the camp for its own sake, without fostering dependency.
All good points, and yeah, I did consider the issue of “appeals,” but considered “accept the judgement you get” part of the implicit (or even explicit, if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on LessWrong a lot 10 years ago, this wouldn’t stop me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people were being bullies, or engaging in bad argument norms, or polluting the epistemic commons, or using performative Dark Arts, and so on.
But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn’t worth the effort in most cases.
From what I’ve observed, LW mod team is far better than most sites at this. But when I imagine a nearer-to-perfect-world, it does include a lot more “heavy handed” moderation in the form of someone outside of an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There’s probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you’re ruled against.
Overall I don’t want to overpromise something like “if LW has a stronger concentration of force expectation for good conversation norms I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
Strong agree. The interesting coordination/incentive questions that come to mind are things like:
Would it help to make criticism have diminishing returns on social status?
Would it help if contribution/building boosts criticism visibility?
How does a society/garden reach the most productive equilibrium of Socrateses? The ideal world, where each Socrates is doing something meaningfully different from the others, is hard to arrive at while each individually feels like they are Fighting the Good Fight.
Thank you both for writing this and sharing your thoughts on the ecosystem in general. It’s always heartening for me, even just as someone who occasionally visits the Bay, to see the amount of attention and thought being put into the effects of things like this on not just the ecosystem there, but also the broader ecosystem that I mostly interact with and work in. Posts like this make me slightly more hopeful for the community’s general health prospects.
Hey Blasted, thanks for sharing :) I remember enjoying Well, will try to check out the others when I get a chance.
Thanks for posting this Adam! (For those that don’t know, I’m Damon)
I think another writing competition would be a good way to encourage stories like this, and am considering what the best way to structure that might be.
Meanwhile, to add a bit more about the sorts of stories I think would be good to see: I think fiction is powerful because it not only allows us to grapple with unusual or alien ideas, but also, when written from the perspective of characters with rich inner lives, lets us see the world through a different lens. When we’re engaged in a character’s experience, their thoughts and reactions and emotions, some part of us can download what it’s like to be that sort of person, and that can give us a blueprint for how to act in that sort of situation.

Many people outside the community don’t know what it’s like to be someone who grapples with problems this big, and many people inside it are desperate for “better” ways to orient to topics that can be frightening, depressing, or painful to think about, such as widespread suffering in the world, or X-risk.
Which is why, among the other types of AI Fables I’d love to see is at least one story about the struggles, internal and external, of a character facing a problem that threatens the world, all while still mostly going about a day-to-day life.
Most stories don’t cover that in particular because most protagonists dealing with such stakes are in constant struggle against it throughout the story. But in our world, for X-risks we face, that’s just not true. Whether you’re trying to prevent nuclear winter or prevent unaligned AGI, you’ll end up spending most of your time among people or in a broader culture that isn’t particularly concerned about it, and in the latter case will likely think you’re kind of weird for worrying about it.
Characters in fiction can do more than entertain or inform us by their actions; they can also inspire us, and give us frames and mental models to help handle difficult emotional situations.
If you have ideas for short stories that might show that, or anything else Adam mentioned, feel free to message me too. Also feel free to reach out if you have thoughts on the best way to solicit such stories; I’m tentatively planning to put something together for late April or May.
I agree that “asserting what someone is doing” can also be considered frame control or manipulation. But I think it’s much less often so, or much less dark artsy, because it’s referencing observable behavior rather than unverifiable/unfalsifiable elements.
>Meanwhile the guru might be supplementing this with non-frame-control techniques. When they argue with you, they imply (maybe in a kind but firm voice, maybe with an undertone of social threat) that you’re kinda stupid for disagreeing with them
This exact implication isn’t frame control, but the common thing I’ve seen gurus do that is more subtle is assert why you disagree with them in a way that reinforces their frame.
“Kinda stupid” is overly crude, and might be spotted and feel off even among those who believe in them, but implying you just don’t “get” what they’re saying because you’re unenlightened or not ready for it is very effective at quieting dissent and maintaining their control.
In general this is why I dislike any attempt to assert with confidence what someone thinks or feels, as well as why. I may be one of the only therapists who hates psychoanalysis, but I maintain that it’s almost always a bad thing to do to anyone who isn’t inviting it, and sometimes even then.
I don’t think it’s particularly stupid to think this might work; it is in fact how most of our ancestors oriented to relationships. We just have higher standards, these days… for good and for ill.
Great post, will add it to my Relationships Orientations guide.
I will note that society somewhat seems to depend on people prioritizing Building relationships over Entertaining ones, and this is certainly how things worked in the old days, such that most of our parents and ancestors did not have the luxury of choosing the most entertaining partners. Our standards as a whole have risen when it comes to relationships, in part due to unrealistic fictional representations, but our processes for selecting partners have not improved proportionally.
It is still (probably) better in most cases to try and find the most happiness you can with a Building relationship if you do want a family, than trying to build a life with someone who primarily fulfills the Entertainment criteria, so long as you and your partner can at least reach stable “contentment.” But people who do so should be very prepared for it to be genuinely hard to maintain a positive relationship with someone over decades without that “spark,” hence the frequency of infidelity and divorce.
Life is just not optimized to give most people ~everything they want in a partner, which can suck to realize, but is (plausibly) important not to fool ourselves about, particularly for monogamous people.
>I think you’re preaching to the choir.
Definitely, but if anyone’s going to disagree in a way that might change my mind or add points I haven’t thought of, I figured it would be people here.