I appreciate you trying to explain. I literally still don’t understand.
> Epistemically speaking you are making very confident sweeping generalizations about something which is at best a tentative evopsych theory and at worst utter nonsense.
The post is definitely speculative. Would it seem less bad if it were labeled as speculative? One of the sentences in the post is
> (I’m ignoring the complexity of tribal living, so this is all a somewhat cartoon picture.)
The basic observation that women are relatively more interested in people is a standardly claimed psych finding. Not saying it’s not controversial, just that I’m not making it up. E.g. this paper https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00189/full has 335 citations. (I should have included that in the post.)
> Epistemically speaking you are making very confident sweeping generalizations about something which is at best a tentative evopsych theory and at worst utter nonsense.
Could you be more specific? I think all the claims here are pretty obvious, except that this one is pretty speculative:
> (Hence females tending on average to be relatively more interested in people over things, compared to males.)
> Socially, this is incredibly dehumanizing and othering.
I agree that dehumanizing and othering is bad. I literally don’t see what’s dehumanizing here or what’s othering here. Can someone explain? I reread my post twice and still don’t get it. My guess is that trying to describe something about a feature of a group of people is being taken as othering. But like, surely that’s an okay thing to do somehow?
> Women are not alien intelligences.
Of course. One of the sentences in the post ends with:
It would be somewhat less bad if it had been more clearly labeled speculative, but that’s not the fundamental issue. “Cartoon” implies to me something like Newton’s laws—not exactly correct, but a good enough model to go on with for the purposes of the conversation. I think your object-level evopsych statements are closer to, uh, I don’t actually know physics nearly well enough to complete the analogy. Some sort of theory of a phenomenon that is not entirely proven to even exist, with some evidence for and some against, which a small minority group of scientists present as settled science and proceed to write further papers using it as an assumption.
I was not saying you had made the claim up, but presenting controversial claims with no hedging is not great. As for everything else, your post implies strongly, without stating outright, various narratives about human motivations/evolution that are not, in fact, obvious. For instance, that women want to secure the loyalty of one man, while men want to have sex with as many women as possible, and that this adversarial dynamic is present in the modern day and results in women, in particular, having unique insight into figuring out the motives of partially aligned intelligences due to practice on men.
It’s okay to describe features of a group of people. Which features you’re describing, how you present your claims, and whether you’re in fact right all matter. In this case, you are, implicitly, making the claim that the difference between men and women is large enough that it makes sense to try to draw an analogy to the difference between humans and AIs, even though you explicitly stated that of course the difference is not as large.
To put it another way, I don’t actually see what using women and men here adds to the analogy beyond “sometimes, humans have to suss out the true intentions of other humans who partially share goals with them when those other humans have motive to deceive them”. To the extent that you are claiming there is a meaningful difference, I think that is [not entirely sure I am phrasing the following correctly] privileging gender as a special axis of human difference in a way that I think is meaningfully wrong and also find unpleasant.
(Somewhat more incidentally, I and many other women I know dislike the use of “females”, “mate”, etc in this context, though that is somewhat trivial and not actually a big deal so much as often correlated with things that do actually bother me.)
A guess about what’s happening: you’re seeing that I said “X” and you’re inferring that I believe “Y” because a lot of people who go around saying “X” also say “Y”. And you’re worried about that, because people who say “Y” have a disturbing pattern of going around mysteriously not noticing all the counterevidence against Y, and also advocating for harming others on the basis of Y being true. That’s a reasonable thing to worry about if you have good reason to think there are such people. But I think responding by punishing people who say “X”, while understandable, is an escalatory sort of action, and is a bad long term solution, and adds to the big pile of people silencing each other. So my somewhat prickly olive branch is: if this is something like what’s really going on, let’s talk about that explicitly.
> As for everything else, your post implies strongly, without stating outright, various narratives about human motivations/evolution that are not, in fact, obvious. For instance, that women want to secure the loyalty of one man, while men want to have sex with as many women as possible, and that this adversarial dynamic is present in the modern day and results in women, in particular, having unique insight into figuring out the motives of partially aligned intelligences due to practice on men.
How does the post imply that? As you’ve stated them, I don’t agree with any of those things, and I didn’t say them, and I didn’t say anything that implied them, except that I said there is some (other) reason that might result in women in particular having unique insight.
> In this case, you are, implicitly, making the claim that the difference between men and women is large enough that it makes sense to try to draw an analogy to the difference between humans and AIs, even though you explicitly stated that of course the difference is not as large.
No I’m not! Men and women are the same on any “human to AI” dimension! The analogy doesn’t rest on differences between men and women, except that there’s a desire to align in that direction, as described, coming from different incentives. I’m not making this claim that you’re saying I’m making! It’s on other people if they make up an interpretation I didn’t state and then ding me for saying that thing I didn’t say. The only analogy is that it’s a general intelligence trying to align another general intelligence.
> I don’t actually see what using women and men here adds to the analogy
It’s an especially strong case of incentive to interpersonally suss out intentions. It’s the strongest one I could think of. What are some other very strong cases?
> in a way that I think is meaningfully wrong
Why do you think it’s meaningfully wrong? Do you mean incorrect, or morally wrong?
Is it related to this experience that I’ve had? https://www.lesswrong.com/posts/bTsYQHfghTwZGnqPS/defensiveness-as-hysterical-invisibility
Thanks for engaging though, I continue to be grateful for you making the effort to help me understand what’s happening, including harms.