I’ve heard that using zinc lozenges a lot can permanently negatively affect your sense of taste. Did you look into that?
RationalElf
Imagine, re Open Phil and hardcore rationalists: “the ex-CEO of MIRI now works at Open Phil, and the CEO of Lightcone is dating an Open Phil employee. These groups have enormous overlap.”
Yes. People can have a lot of social overlap, yet have very different views from one another, especially in the broader Bay Area intellectual ecosystem. My sense is that Anthropic leadership has very different views from most AI safety EAs.
More than 50% of the talent-weighted safety people in EA are literally employees of Anthropic!
Why do you think this? I’m skeptical this is true, especially if you’re including non-technical talent.
Why is there so much emphasis on OpenAI and its arrangement with the Department of War relative to GDM’s and xAI’s, and is that rational? While OpenAI seems like it’s behaving much worse than Anthropic, it seems arguably better than those other two, and I’m worried this is a case of it being punished for doing more than nothing (or rather, that some of the ire currently focused on OpenAI should focus on them).
Agree that OpenAI’s and the Department of War’s comms about their arrangement were weird, sketchy, and triggering (but not necessarily worse than complete silence, in my mind)
What conclusions do people draw from the Epstein files about the global elite? It seems like a kind of interesting and amazing window into a huge swath of wealthy and powerful people.
I haven’t dug in that much yet, but so far I’m struck by how sleazy many of them seem and how rarely they push back against unethical behavior. Also by how bad at spelling and how inarticulate people who seem fairly intellectual in other contexts (e.g. Larry Summers) are in their casual communications.
I’m unsurprised by the lack of evidence of other mass conspiracies.
Absent AI progress, I don’t know why you think it’s going to improve a lot. I did it once when I was 21, and I did it again when I was 29, and the technology was indistinguishable to me. I also think the costs to my career were a lot higher doing it late than doing it young—your time is so low-value in college.
I calculated the second time, and it took me about 50 hours end-to-end.
More broadly, it just feels like a just-so story based on a single anecdote. I think a lot of people do think that their parents loved some of their kids more than others, one could generate a lot of different stories about why that happens and why it doesn’t happen, and how it relates to one’s feelings about oneself and one’s partner. And the evidence/story here doesn’t seem very strong to me.
Your kids are built from all of you and all of your partner. If you love all of that, then you love all of them… your children are roughly a mosaic of you and the person you picked to make them with.
And that means something else too: If you don’t love parts of yourself or your partner, then your children will see that too. If you get angry at yourself for always being late or angry at your partner for always making a mess, then your kids will see you won’t love those parts of them either.
I don’t have kids but I’m not buying this, this sounds very fake to me.
I think even people who love themselves in a healthy way don’t love all parts of themselves equally (ditto for how they relate to their partners). I love my desire for truth more than I love my neurotic insecurities about what parties I get invited to. I love my partner’s love of learning more than I love his forgetfulness about house chores. I don’t think that’s wrong or unhealthy, and I think there’s a massive gulf between getting problematically angry at my partner (or myself) for making a mess and acting like everything he does (or I do) is exactly equally lovable (or worse, thinking it).
As you note, not all of my genes show in my phenotype. For example, my partner and I could both carry recessive genes that a child could inherit, such that they end up with a phenotype with traits totally different from ours. Some of those traits could be really big and really important (e.g. Tay-Sachs).
Not all children are children of the same beloved partner. People have children by rape. They have children via one-night stands. They have children via partners that they used to love but no longer love. Maybe you’d bite that bullet and say, “Well I love all my kids equally, but probably parents don’t love their kids equally in situations like that.” (Which I’d find interesting, but seems like it’s not the vibe of your post)
Eh? I think love and friendship are just complicated concepts that involve states of mind and behavior, and people disagree about what should be part of those clusters or not (though I appreciate the concreteness of this proposal)
Being an EA makes this too complicated for me; I can’t help thinking about people’s expected impact on the world. There are people I actively dislike who I would easily take a 10% chance of death for, and people who I believe I love but think are deeply harming the world. There are even people whose deaths I think would be actively good for the world, and for whom giving up my life might be worth it from an expected-value perspective, except that murder seems extremely bad, even complicated thought-experiment murder where you’re just walking into a place.
I also think intellectual respect is not a binary trait with respect to whether you have it for an individual or not. I think that you can (and often should) have intellectual respect for an individual on some topics but not others. E.g. I merit no intellectual respect on any topic related to sports. I think a lot of rationality is about trying to deserve intellectual respect on increasingly meta/abstract levels (e.g. while I don’t think I merit any intellectual respect on any topic related to sports, I would hope that I would merit some intellectual respect if I were to try to confer about how to approach learning about sports, because I try to be thoughtful about how to learn new topics in an efficient, unbiased, and truth-seeking way). But I think that even for the people who are most worthy of intellectual respect writ large, there’s quite a bit of unevenness.
My guess is that what Richard is trying to gesture at, and what I would claim you should maybe do, is separate the concept of moral patienthood and moral agency to a greater extent. Like with a dog, you might love and cherish a child without respecting their policies or their moral reasoning at the level that they’re at. And you might care a lot about their happiness, protecting them from harm, empathizing with their sorrows, meeting their preferences, making them feel comfortable, etc.
Obviously you shouldn’t literally treat an adult exactly the same way you would treat a dog or a child, but I think that there might be a path to channeling respect for them as moral patients who feel, who love, who grieve, who dream, etc. while also completely acknowledging their shortcomings
I guess to reframe another way: Are you incredibly shitty towards babies and dogs? If you are, then (assuming you agree that babies and dogs are moral patients) I would claim that your problem is about how to treat with care and empathy beings you don’t intellectually respect. It’s not (just) about how to find a path to intellectually respecting adults who don’t merit it, because there will always be beings that merit empathy and love but not intellectual respect.
Generalizing just a little bit beyond rape fantasies: AFAICT, being verbally asked for consent is super-duper a turn off for most women. Same with having to initiate sex;
Neither of these links contain statistics about what fraction of women like being verbally asked for consent or dislike having to initiate sex. They’re literally just one woman talking about her experience. I don’t think this is very good evidence for your claim which is pretty central to your post.
I don’t have a strong guess about what fraction of women strongly dislike being verbally asked for sex (especially if it’s done reasonably skillfully and non-robotically); just to add another anecdote since we’re apparently trading anecdotes around, I am a woman who would strongly dislike not being asked verbally before having sex with someone for the first time.
I would guess that more than half dislike having to initiate sex, based on vibes, but I’m pretty uncertain what fraction would say this is the case
So let’s start with some statistics from Lehmiller[1]: roughly two thirds of women and half of men have some fantasy of being raped.
Could you include more details about these statistics? “X% of people have ever had some fantasy” is extremely different from “the same X% of people have that fantasy most times when they masturbate,” or whatever the case might be. I also care about how careful they were to distinguish between “X% of people have imagined rape occurring vividly” and “the same X% of people have actually fantasized about it as a pleasurable, sexually arousing experience.” This also reports a much higher percentage of people having rape fantasies than other statistics I’ve seen (e.g. here), including, if you worded this correctly, half of men having fantasies about being raped, which would actually surprise me even more than the two-thirds-of-women statistic.
seems probably legally risky.
“cowardly” because my strong guess is that their actions were driven by fear of social censure rather than calculated attempts to minimize losses. If they were trying to minimize losses to their non-selfish goals of ousting Sam A, who I think they believed to be a bad and dangerous actor, that would have been better served by coming clean about why they did what they did.
I agree, but I think both occurred. They had a long-term secret plan and tried to execute it (a scheme), and then when it went poorly they acted based on fear (or possibly just complete disregard for the truth and the interests of others).
Am I understanding correctly that recent revelations from Ilya’s deposition (e.g. looking at the parts here) suggest Ilya Sutskever and Mira Murati seem like very selfish and/or cowardly people? They seem approximately as scheming or manipulative as Sam Altman, if maybe more cowardly and less competent.
My understanding is that they were basically wholly responsible for causing the board to try to fire Sam Altman. But when it went south, they actively sabotaged the firing (e.g. Mira disavowing it and trying to retain her role, Ilya saying he regrets it) and then let Helen Toner, Tasha McCauley, and effective altruism / AI safety take the blame almost completely, for years (as Zvi notes in the post linked above). I think this is a really really bad thing to do!
Am I understanding this correctly?
Another is the concern that the cure is worse than the disease, i.e. that the drama and relationship damage caused by trying to expel them from the community might hurt the community more than their continued presence would. Just as there are scissor statements, there are also scissor people.
You might be in a community where you don’t think people will agree with you that they’re a bad actor, even if you can establish the truth about what events occurred in the world, because there’s a value disagreement between you and your community.
Also concern about them and their well-being. Being publicly ostracized is very traumatizing and scary for most people. Particularly if they seem mentally fragile, you might fear the consequences for them or potentially for others who aren’t just you if they’re forced to endure a public ousting. You might fear or be averse to causing them pain. You might have sympathy for them, particularly if you think the sense in which they’re a bad actor was in turn caused by something bad happening to them.
You might fear that exposing their bad behavior will bring harm to others who are associated with them. For example, if they’re part of some oppressed minority group and you fear that people will overgeneralize from their bad behavior to being mistrustful of or more prejudiced against others.
Tone note: I really don’t like people responding to other people’s claims with content like “No. Bad… Bad naive consequentialism” (I’m totally fine with “Really not what I support. Strong disagree.”). It reads quite strongly to me as trying to scold someone or socially punish them, using social status, for a claim you disagree with; it feels continuous with a frame in which habryka is the arbiter of the Good.
Personally? In various complicated ways. I wasn’t advocating for always attending to such things, just disputing that highly time-sensitive messages rarely come about at all.
Did you look into Profi and other nasal sprays like that?