Responding to Zack’s comment here in a new thread since the other thread went in a different direction.
The thing me and my allies were hoping for a “court ruling” on was not about who should or shouldn’t be held in high regard, but about the philosophical claim that one “ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]”. (I think this is false.) That’s what really matters, not who we should or shouldn’t hold in high regard.
I found this a helpful, crisp summary of the actual thing you want. (I realize you’re not done with this blog series yet, and the series is probably serving multiple purposes, but insofar as what-you-wanted hasn’t happened, I think writing a post at the end that succinctly spells out the things you want but don’t feel you’ve gotten yet would probably be worthwhile.)
A thing I’m still somewhat fuzzy on is whether you think this court ruling applies “on LessWrong / in truth-focused contexts and communities”, “worldwide in all contexts”, or something in between; and, insofar as you think it’s “worldwide in all contexts”, whether you think it has the same degree of crispness, i.e. “there’s a mathematically right answer here” versus “this is the correct tradeoff to make given a messy world.”
I think one of your central points of contention here is that the bolded sentence is false:
If I’m willing to accept an unexpected chunk of Turkey deep inside Syrian territory to honor some random dead guy – and I better, or else a platoon of Turkish special forces will want to have a word with me – **then I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it’ll save someone’s life**. There’s no rule of rationality saying that I shouldn’t, and there are plenty of rules of human decency saying that I should.
...and Scott should retract it, and we should all agree he should retract it (and that refusal to do this makes for a sort of epistemic black hole that ripples outward in a contagious-lies sort of way, which has pretty bad consequences for our collective mapmaking, as well as just being factually wrong in isolation).
(I agree with the above claim. I think both rationalists and broader society should get called out on claims like this.)
One more claim I’m pretty sure you’re making that I agree with is “it’s important to preserve a category for ‘actual women’ that you can still talk about.”
I’m not sure if you also think:
- self-described rationalists shouldn’t be willing to accept a social category of “women” that includes not-very-successfully-transitioned transwomen, because of reasoning like “there are rules of rationality suggesting this has epistemic costs, but the social benefits outweigh those costs” (for a concrete sense of “epistemic costs”, see the toy sketch after this list).
- broader society shouldn’t have that social category.
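As a toy illustration of what “epistemic costs” can mean here, consider a simple probabilistic sketch (my construction for concreteness, not something taken from Zack’s posts, and the numbers are made up): if a category label is your main channel for predicting a feature, then pooling two unlike clusters under one label makes the label’s predictions strictly worse.

```python
import math

# Hypothetical numbers: P(feature) within each cluster, and cluster frequencies.
p_a, p_b = 0.95, 0.20   # cluster A items usually have the feature; B items rarely do
w_a, w_b = 0.9, 0.1     # relative frequency of each cluster in the population

def expected_log_loss(p_true: float, p_pred: float) -> float:
    """Expected log-loss of always predicting rate `p_pred` when the true rate is `p_true`."""
    return -(p_true * math.log(p_pred) + (1 - p_true) * math.log(1 - p_pred))

# Separate labels: each cluster gets predictions from its own rate.
loss_separate = w_a * expected_log_loss(p_a, p_a) + w_b * expected_log_loss(p_b, p_b)

# One pooled label: everyone gets predictions from the mixture rate.
p_pooled = w_a * p_a + w_b * p_b
loss_pooled = w_a * expected_log_loss(p_a, p_pooled) + w_b * expected_log_loss(p_b, p_pooled)

print(f"expected log-loss, separate labels: {loss_separate:.3f}")
print(f"expected log-loss, pooled label:    {loss_pooled:.3f}")  # strictly larger
```

The gap between the two losses is the information the coarser category throws away; the dispute is over whether any social benefit can license ignoring that gap.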
I’m separating out “rationalist society” and “broader society” because they seem different to me. Rationalists have spec’d into prioritizing truth/mapmaking. I’m confidently glad someone is doing that. I’d confidently argue broader society should be more into truth/mapmaking but I’m not sure what steps I’d take in what order to achieve that, and what tradeoffs to make along the way given various messy realities.
(Or: since “rationalist” is a made-up label and there’s nothing intrinsically wrong with being “more truthseeking-focused without making it your Primary Deal”, a better phrasing is: “If you are trying to be truthseeking-focused and you haven’t noticed or cared that you subverted a rationality principle for political expedience, you should be pretty concerned that you are doing this in other domains, or that it has more contagious-lie effects than you’re acknowledging. Other people should be suspicious of you and not grant you as much credibility as a ‘rationalist’. You should at the very least notice the edges of your truthseeking competence.”)
There are some other specific claims I’m not sure whether you’re making, but I’ll leave it there for now.
Thanks for your patience. For the most part, I try to be reluctant to issue proclamations about what other people “should” do, as I explain in “I Don’t Do Policy”. (“Should” claims can be decomposed into conditional predictions about consequences given actions, and preferences over consequences. The conditional predictions can be evaluated on their merits, and I don’t control other people’s preferences.) In particular, I don’t think there’s One True social gender convention.
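(Spelling that decomposition out a bit more formally, as a sketch in standard expected-utility notation rather than anything quoted from the post:

$$\text{“you should do } a\text{”} \;\approx\; a \in \arg\max_{a'} \sum_{o} P(o \mid a')\, U(o)$$

where the conditional predictions $P(o \mid a')$ are the part that can be evaluated on the merits, and the utility function $U$ over outcomes encodes preferences that aren’t mine to set.)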
The “court ruling” thing was an unusual case where, you know, I had been under the impression that this subculture with Eliezer Yudkowsky as its acknowledged leader was serious about being “spec’d into prioritizing truth/mapmaking”. It really seemed like the kind of thing that shouldn’t be hard to clear up—that he would perceive an interest in clearing up, after the problem had been explained.
My position at the time was, “Scott should retract it and we should all agree he should retract it”. Since that didn’t happen despite absurd efforts, my conclusion is more that there is no “we”. It would be nice if there were a subculture that had spec’d into prioritizing truth/mapmaking, but our little cult/patronage-network doesn’t deserve credit for that: on the margin, I’m now more interested in efforts to break people of the delusion that the ingroup has a monopoly on Truth and Goodness, than I am in efforts to get the ingroup to live up to its marketing material about being the place in the world for people who are interested in Truth and Goodness.
Obviously, this doesn’t entail giving up on Truth and Goodness! (“Thinking” and “writing” and “helping people” were not invented here.) It doesn’t even entail cutting ties with “the community.” (I am, actually, still using this website, at least for now.) It does involve propagating the implication from “‘rationalist’ is a made-up label” (as you say) to applying the same standards of moral reasoning to both the ingroup’s cult/patronage-network and broader Society (and not confusing the former for “rationalist Society”).
Because you know, maybe the “community” model was just never a good idea to begin with? I’m worried that if I were to accept your support in enforcing norms about the category-boundary thing, then the elephant in my brain might think I therefore owed you a favor and should stop giving you such a hard time about the free-speech-for-disagreeable-people thing. As the post notes, that’s not how it works. Or maybe it is how human communities work, but it’s not how the ideal justice of systematically correct reasoning works.
Nod, thanks.

Okay, rewriting this to check my understanding, you’re saying:
- In a rationalist community that was actually pretty successful at mapmaking, more people would have proactively noticed that that particular line of Scott’s was false. The fact that this didn’t happen is evidence about how much one should trust (and should have trusted) the rationality community to live up to its marketing material.
- But rather than being primarily interested in actually prosecuting that case, at this point you think it’s more important to drive home the point “we’re already living in the world where the community failed to notice that this was a rationality test, and failed it”, and that this has implications for how people should think of “the rationality community” (or lack thereof).
I didn’t quite understand the second clause until you spelled it out just now, thanks.

Overall I am still more focused on “actually living up to the marketing hype”, because, well, I think we just need good enough epistemics to handle high-stakes decisions with unclear technical underpinnings and political motivations. I’d want to get the Real Thing whether or not I previously believed that we had it.
(I’m not sure whether this is particularly different from your model here? I guess the major diff in this domain is that I’m still more optimistic about solutions mediated through “having a community with shared norms”, and you think that’s sufficiently likely to be net-negative that one should be pretty skeptical of it by default?)
...
Mostly separate point:
I agree somewhat directionally with the second point, but don’t think it’s as big a deal as you think. I think you’re basically right that you were gaslit to some degree by people with politically motivated cognition. But some other considerations at play are:
- It’s in fact non-obvious what the right answer is. (It seems like Eliezer or Scott shouldn’t get this wrong or take much convincing, but see below for additional complications; this was at least a nontrivial part of my own confusion earlier on. Your blog posts that focused on the math helped me.)
- It sounded like it was in some cases unclear to people what question you were trying to argue (based on your description of some arguments with Scott and Kelsey), i.e. “self-identity isn’t a good way to define male/female” vs. “it matters in the first place that words mean things, and that people doing ‘shared mapmaking’ should at least notice and care about tradeoffs re: mapmakability.”
  - (This includes multi-level frame-mismatches, e.g. the fact that you “don’t do policy” is itself surprising and non-obvious to people.)
- I think you in fact have multiple goals or claims, and even if “Scott, you should retract this one sentence” was the most important one originally, I think people were correct not to believe that was your only goal, and to be suspicious of your framing?
- People have different experiences of how “thought policed” they perceive themselves as being, which changes their initial guesses of how bad the problem you’re trying to point at is. (E.g., I recently had an argument with someone, call him Bob, about why he should care about your deal. I said “Zack doesn’t like people telling him what to think, especially when they are telling him how to think wrong”, and Bob responded “but nobody is telling him how to think!”, and I said “oh holy hell, they are absolutely telling him how to think. I can think of at least one concrete FB argument where one rationalist-you-know was specifically upset at another rationalist-you-know for not thinking of them as male, not merely over what words they used.” Bob was surprised, and updated.)
And then a last point (which’d probably sound rude of me to bring up to most people, but seems important here) is that, well, at least since 2018, you’ve had a pretty strong vibe of “seeming to relate to this in a zealous and unhealthy way”. It’s hard to tell the difference between “people are avoiding the real conversation due to leftist political reasons” and “people are avoiding the conversation because you just seem… a little crazy”. It’s especially hard to tell the difference when both are true at the same time.
All five points (i.e. those four bullets plus the sort of political mindkilledness you seem to be primarily hypothesizing) have different mechanisms. But they overlap and blur together, and it’s hard to tell which are most significant. I think you’d probably agree all five are relevant, but maybe attribute >60% of the causality to the “politically mindkilled and/or political expedience” aspect, where I think that’s maybe more like 30–45% of it. Which is still a lot, but being less than 50% of the causality changes my relationship with it.
...
(I didn’t quite find a place to link to it inline, but this whole thing is one of the reasons I wrote “Norm Innovation and Theory of Mind”.)

There never was.