I want to push back on anyone downvoting this because it’s sexist, dehumanizing, and othering (rather than just being a bad model). I am sad if a model/analogy has those negative effects, but supposing the model/analogy in fact held and was informative, I think we should be able to discuss it. And even the possibility that something in the realm of gender relations has relevant lessons for Alignment seems like something we should be able to discuss.
Or alternatively stated, I want to push for Decoupling norms here.
In contexts where the model will not be used to make decisions about humans (which are rare!), “sexist” is when something is a bad model in the direction of sexism. There are real differences; accurate representations of them are not sexism. Those differences are quite small, and are often misunderstood as large in ways that produce nonsensical models. As @eukaryote wrote, the specific evopsych proposal under consideration here is privileging a hypothesis.
Alternatively stated, you cannot convince me to decouple when there are real mechanistic reasons that the coupling exists, because then you’re simply asking me to suspend my epistemic evaluation of the model.
Of course, I also simply don’t believe in decoupling norms in general, because reductionism doesn’t work to find the true mechanisms of reality in contexts where the mechanisms have significant amounts of complexity that is computationally intractable to discover by simulation, and which therefore for practical purposes only exists as shapes in the macroscopic structure of worldstate. Decoupling/reductionism-based models reliably mismodel those sorts of complex systems. One needs instead to figure out how to abstract over the coupling.
What do you mean “privileging a hypothesis”? The LW concept https://www.lesswrong.com/tag/privileging-the-hypothesis is about raising a hypothesis to consideration without enough evidence to point to that hypothesis. I gave reasons for raising this hypothesis.
What does decoupling have to do with reductionism? Decoupling doesn’t mean “do reductionism”, it means decoupling factual questions from social / political tone and conflict. [Edit: I was partially wrong. The concept of “high/low-decouplers” described here https://www.reddit.com/r/slatestarcodex/comments/8fnch2/high_decouplers_and_low_decouplers/ is sort of related to reductionism, though is far from the same thing (because what you’re decoupling can be a high-level claim, holistic in the sense of abstract, if not holistic in the sense of letting in all the context). The idea of decoupling norms as described in the post Ruby linked, https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms , is as I said, though more precisely stated as being about implications in general.]
In addition to what gears said, I think the sexist othering etc is not actually critical to the analogy, which is kind of the problem. “Figuring out the motives of people who kind of share goals with you but also have reasons to lie” is a pretty universal human experience. Adding some gender evopsych on top is just annoying (and prevents thinking about many of the more interesting ways in which this dynamic can play out).
I agree it’s not strictly critical to the analogy, and my rewrite removes the evopsych. But I actually think that this specific case is plausibly the single most intense instance of the dynamic, which is why I wrote about it specifically, and why the rewrite seems less interesting to me. What are some other cases where there are comparably strong pressures?