Flagging that I also disagree with this (it also seems to obviously be failing rule #10).
I’m a bit confused about this, because, like, I’m sure you know that time is short, there are lots of (true) things to talk about, and going infinitely deep on precisely specifying any given thing is clearly unworkable even if you pick the specific sub-hill of “be LessWrong” to die on rather than the broader hill of “maximize truthseeking.” I assume you pick some point on the curve where you’re like “okay, practically, that was enough precision”, which is just higher than mine.
When I imagine bringing this up, my Duncan-sim says “yes, I know that, can pass your ITT, and have integrated it”, but I don’t really know why you’re making the tradeoffs you do.
There’s a hell of a lot of stuff I want to learn, and it honestly seems anti-helpful to me, on truthseeking terms, to spend the amount you do on nuance, when there is so much other stuff I need to think about, learn, discuss and reason about.
it honestly seems anti-helpful to me, on truthseeking terms, to spend the amount you do on nuance, when there is so much other stuff I need to think about, learn, discuss and reason about.
See the linked Sapir-Whorf bit, especially Nate’s tweetstorm; I am not, in fact, “spending” effort on nuance; most of the time the nuance is genuinely effortless because I’m just straightforwardly describing the world I see and saying things that feel true.
If it feels particularly effortful, or like one is injecting nuance, then I think this usually means that one’s underlying thoughts and models aren’t nuanced (at that level).
Over and over, the actual guidelines post tries to make clear “a big important piece of this puzzle is just being open to requests that the conversation get more nuanced or more precise, as opposed to expecting to hit convergence with your partner on the first go (or tying yourself into knots trying to do so).”
If it feels particularly effortful, or like one is injecting nuance, then I think this usually means that one’s underlying thoughts and models aren’t nuanced (at that level).
Having nuanced thoughts and models is not a free action either, though, so I don’t think this necessarily speaks against the marginal effectiveness point. And speaking in a nuanced way will not be effortless for your listeners if their own models don’t already possess that nuance.
See my more direct reply above; this was a very gentle experiment in trying to meet the conversational norms it seemed to me DirectedEvolution was explicitly advocating for. I feel like the results of the experiment underscore my point/are in support of my core position.