Nod, thanks.
Okay, rewriting this to check my understanding, you’re saying:
In a rationalist community that was actually pretty successful at mapmaking, more people would have proactively noticed that that particular line of Scott’s was false. The fact that this didn’t happen is evidence about how much one should trust (and should have trusted) the rationality community to live up to its marketing material.
But, rather than being primarily interested in actually prosecuting that case, at this point you think it’s more important to drive home the point “we’re already living in the world where the community failed to notice that this was a rationality test, and failed it”, and that this has implications for how people should think of “the rationality community” (or lack thereof).
I didn’t quite understand the second clause until you spelled it out just now, thanks.
Overall I am still more focused on “actually living up to the marketing hype” because, well, I think we just need good enough epistemics to handle high-stakes decisions with unclear technical underpinnings and political motivations. I’d want to get the Real Thing whether or not I previously believed that we had it.
(I’m not sure whether this is particularly different from your model here? I guess the major in-domain difference is that I’m still more optimistic about solutions mediated through “having a community with shared norms”, and you think that’s sufficiently likely to be net-negative that one should be pretty skeptical of it by default?)
...
Mostly separate point:
I agree somewhat directionally with the second point, but don’t think it’s as big a deal as you think. I think you’re basically right that you were gaslit to some degree by people with politically motivated cognition. But some other considerations at play are:
It’s in fact non-obvious what the right answer is. (It seems like Eliezer or Scott shouldn’t get this wrong or take much convincing but, see below for additional problems; I think this was at least a nontrivial part of my own confusion earlier on. Your blog posts that focused on the math helped me.)
It sounded like it was in some cases unclear to people what question you were trying to argue (based on your description of some arguments with Scott and Kelsey), i.e. “self-identity isn’t a good way to define male/female” vs. “it matters in the first place that words mean things, and that people doing ‘shared mapmaking’ should at least notice and care about tradeoffs re: mapmakability.”
(This includes multi-level frame mismatches, e.g. the fact that you “don’t do policy” is itself surprising and non-obvious to people.)
I think you in fact have multiple goals or claims, and even if “Scott, you should retract this one sentence” was the most important one originally, I think people were correct not to believe that was your only goal, and to be suspicious of your framing?
People have different experiences of how “thought policed” they perceive themselves as being, which changes their initial guesses of how bad the problem you’re trying to point at is. (For example, I recently had an argument with someone about why they should care about your deal. I said ‘Zack doesn’t like people telling him what to think, especially when they’re telling him how to think wrong’, and Bob responded ‘but nobody is telling him how to think!’ And I said ‘oh holy hell, they are absolutely telling him how to think. I can think of at least one concrete FB argument where one rationalist-you-know was specifically upset at another rationalist-you-know for not thinking of them as male, not merely for what words they used.’ Bob was surprised, and updated.)
And then a last point (which’d probably sound rude of me to bring up to most people, but seems important here) is: well, at least since 2018, you’ve had a pretty strong vibe of “seeming to relate to this in a zealous and unhealthy way”. It’s hard to tell the difference between “people are avoiding the real conversation due to leftist political reasons” and “people are avoiding the conversation because you just seem… a little crazy”. It’s especially hard to tell the difference when both are true at the same time.
All five points (i.e. those four bullets + the sort of political mindkilledness you seem to be primarily hypothesizing) have different mechanisms, but they overlap and blur together and it’s hard to tell which are most significant. I think you’d probably agree all four are relevant, but you’d maybe attribute >60% of the causality to the “politically mindkilled and/or political expedience” aspect, where I think that’s… maybe like 30–45% of it? Which is still a lot, but being less than 50% of the causality changes my relationship with it.
...
(I didn’t quite find a place to link to it inline, but this whole thing is one of the reasons I wrote Norm Innovation and Theory of Mind.)