I also think intellectual respect is not a binary trait that you either have or lack for a given individual. You can (and often should) have intellectual respect for an individual on some topics but not others. E.g. I merit no intellectual respect on any topic related to sports. I think a lot of rationality is about trying to deserve intellectual respect on increasingly meta/abstract levels (e.g. while I don’t think I merit any intellectual respect on any topic related to sports, I would hope to merit some if I were discussing how to approach learning about sports, because I try to be thoughtful about how to learn new topics in an efficient, unbiased, and truth-seeking way). But I think that even for the people who are most worthy of intellectual respect writ large, there’s quite a bit of unevenness.
My guess is that what Richard is trying to gesture at, and what I would claim you should maybe do, is separate the concepts of moral patienthood and moral agency to a greater extent. As with a dog, you might love and cherish a child without respecting their policies or their moral reasoning at its current level. And you might still care a lot about their happiness, protecting them from harm, empathizing with their sorrows, meeting their preferences, making them feel comfortable, etc.
Obviously you shouldn’t literally treat an adult exactly the same way you would treat a dog or a child, but I think that there might be a path to channeling respect for them as moral patients who feel, who love, who grieve, who dream, etc., while also completely acknowledging their shortcomings.
I guess to reframe another way: are you incredibly shitty towards babies and dogs? If you are, then (assuming you agree that babies and dogs are moral patients) I would claim that your problem is about how to treat with care and empathy beings whom you don’t intellectually respect. It’s not (just) about how to find a path to intellectually respecting adults who don’t merit it, because there will always be beings that merit empathy and love but not intellectual respect.
Generalizing just a little bit beyond rape fantasies: AFAICT, being verbally asked for consent is super-duper a turn off for most women. Same with having to initiate sex;
Neither of these links contains statistics about what fraction of women like being verbally asked for consent or dislike having to initiate sex; they’re literally just one woman talking about her experience. I don’t think this is very good evidence for your claim, which is pretty central to your post.
I don’t have a strong guess about what fraction of women strongly dislike being verbally asked for sex (especially if it’s done reasonably skillfully and non-robotically). Just to add another anecdote, since we’re apparently trading anecdotes: I am a woman who would strongly dislike not being asked verbally before having sex with someone for the first time.
I would guess, based on vibes, that more than half dislike having to initiate sex, but I’m pretty uncertain what fraction would actually say so.
So let’s start with some statistics from Lehmiller[1]: roughly two thirds of women and half of men have some fantasy of being raped.
Could you include more details about these statistics? “X% of people have ever had some fantasy” is extremely different from “the same X% of people have that fantasy most times they masturbate,” or whatever the case might be. I also care about how careful they were to distinguish between “X% of people have vividly imagined rape occurring” and “the same X% of people have actually fantasized about it as a pleasurable, sexually arousing experience.”

It also reports a much higher percentage of people having rape fantasies than other statistics I’ve seen (e.g. here), including, if you worded this correctly, half of men having fantasies about being raped, which would actually surprise me even more than the two-thirds-of-women statistic.
seems probably legally risky.
“Cowardly” because my strong guess is that their actions were driven by fear of social censure rather than calculated attempts to minimize losses. If they were trying to minimize losses to their non-selfish goal of ousting Sam A, whom I think they believed to be a bad and dangerous actor, that goal would have been better served by coming clean about why they did what they did.
I agree, but I think both occurred: they had a long-term secret plan and tried to execute it (a scheme), and then, when it went poorly, they acted based on fear (or possibly just complete disregard for the truth and the interests of others).
RationalElf’s Shortform
Am I understanding correctly that recent revelations from Ilya’s deposition (e.g. looking at the parts here) suggest that Ilya Sutskever and Mira Murati are very selfish and/or cowardly people? They seem approximately as scheming and manipulative as Sam Altman, if perhaps more cowardly and less competent.
My understanding from the deposition is that they were basically wholly responsible for causing the board to try to fire Sam Altman. But when it went south, they actively sabotaged the firing (e.g. Mira disavowing it and trying to retain her role, Ilya saying he regretted it) and then let Helen Toner, Tasha McCauley, and effective altruism / AI safety take the blame almost completely, for years (as Zvi notes in the post linked above). I think this is a really, really bad thing to do!
Am I understanding this correctly?
Another is concern that the cure is worse than the disease, i.e. that the drama and relationship damage caused by trying to expel them from the community might hurt the community more than removing them would help. Just as there are scissor statements, there are also scissor people.
You might be in a community where you don’t think people will agree with you that they’re a bad actor, even if you can establish the truth about what events occurred in the world, because there’s a value disagreement between you and your community.
Also concern about them and their well-being. Being publicly ostracized is very traumatizing and scary for most people. Particularly if they seem mentally fragile, you might fear the consequences for them, or for others beyond yourself, if they’re forced to endure a public ousting. You might fear or be averse to causing them pain. You might have sympathy for them, particularly if you think the sense in which they’re a bad actor was itself caused by something bad happening to them.
You might fear that exposing their bad behavior will bring harm to others who are associated with them. For example, if they’re part of some oppressed minority group and you fear that people will overgeneralize from their bad behavior to being mistrustful of or more prejudiced against others.
Tone note: I really don’t like people responding to other people’s claims with content like “No. Bad… Bad naive consequentialism” (I’m totally fine with “Really not what I support. Strong disagree.”). It reads quite strongly to me as trying to scold someone or socially punish them, using social status, for a claim that you disagree with; it feels continuous with some kind of frame that’s like “habryka is the arbiter of the Good.”
Personally? In various complicated ways. I wasn’t advocating for always attending to such things, just disputing that highly time-sensitive messages rarely come up at all.
I agree with and really like most of this post.
There are some things your phone can tell you that are urgent, like someone changing plans at the last minute. But that is not so urgent that you couldn’t wait to pull over.
I think I experience quite a lot of things that are very time-sensitive (though they’re rarely important), more time-sensitive than you indicated. E.g. my friend is at the grocery store buying some items for a dinner party we’re throwing together. They ask, “Do you have flour or should I buy some? I’m on the checkout line.” Or my partner is about to leave the house and asks which bottle of wine to bring as a gift to the party we’re going to, and if he waits another few minutes, he will miss the upcoming train and be late. These things are often urgent on the scale of 1–7 minutes.
In the ship (and corporation) case, this seems like a weird semantic thing where we use the term “legal person” in a way that’s very different from what people colloquially mean by “person”, and that affords only a small fraction of the legal rights and responsibilities that human persons generally have. The other two examples seem more in between.
Thanks for writing, seems like an important topic! Given that (you said) 86% of people have HSV 1 or 2 (and those who don’t are probably disproportionately children, who are unlikely to read your post on LW), advice about mitigating downsides of having the viruses seems potentially more useful than advice about avoiding them (but maybe there are no good mitigations).
I think this is false, because that covers only the Open Phil 501(c)(3); Open Phil also employs lots of people at an LLC, which doesn’t file a 990.
Even for the attorneys general, I think you could make a case that there ought to be some sort of social punishment, even if the way they acted was in some sense normal or above-average. That could be both because we want to change the norm / incentivize better behavior in the future, and for decision-theory reasons (even if what they did was normal or above-average compared to how most attorneys general handle most cases, we might want people to believe that they’ll be remembered badly by history if they act so suboptimally in such important circumstances).
I feel ambivalent and complicated about this. In some objective sense, I think that the attorneys general enabled a huge theft and (I think more importantly) made humanity a lot less safe than it could have been if they had acted in a different way that was also totally within their power. So in an objective sense they enabled great harm.
On the other hand, I get the sense that they did a lot more than they had to, and more than most people who are knowledgeable about this kind of thing expected them to, and the negotiation seems complicated enough that it seems like they at least tried to engage with the issue (an area they were probably unfamiliar with and not well-staffed to adjudicate) in a pretty deep way. They were probably under enormous pressure. I also get the sense that Attorney General Jennings is less susceptible to pressure from companies and more concerned with the rule of law than most attorneys general. So in a relative sense, I think it’s possible that they did a pretty good job.
I feel worse about the board members, both because I think this was much more directly their responsibility, and because I get the sense that they allow or even encourage a lot of egregious behavior from OpenAI that’s contrary to OpenAI’s mission. Compared to the reference class of nonprofit board members, I think they perform much worse than Jennings does relative to the reference class of attorneys general.
Not necessarily a counterpoint to your main point, but Lightcone’s headquarters is not in San Francisco. It’s in Berkeley, which is a small city of its own with a very different vibe from most of San Francisco (it’s greener, less dense, and more suburban, with fewer tall buildings, and it’s fairly walkable and cute in most parts).
Eh? I think love and friendship are just complicated concepts that involve states of mind and behavior, and people disagree about what should or shouldn’t be part of those clusters (though I appreciate the concreteness of this proposal).
Being an EA makes this too complicated for me; I can’t help thinking about people’s expected impact on the world. There are people I actively dislike for whom I would easily take a 10% chance of death, and people I believe I love but think are deeply harming the world. There are even people whose deaths I think would be actively good for the world, and for whom it maybe would be worth, from an expected-value perspective, giving up my life, except for the fact that murder seems extremely bad, even complicated, thought-experiment murder where you’re just walking into a place.