you’re pushing more for an abstract principle than a concrete change
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely “pushed for.” If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I’m claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
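For concreteness, here is a minimal sketch of the kind of program I mean. The specific numbers (a fair coin for A, 90%-faithful copies for the honest B and for C, and a "liar" B that always reports 1 to look good) are illustrative assumptions of mine, not anything established above; the point is just the evidential chain.

```python
import random

def simulate(n, b_given_a):
    """Estimate P(A=1 | C=1) over n trials of the chain A -> B -> C.

    b_given_a(a) returns system B's state given system A's state;
    system C is then a 90%-faithful copy of B.
    """
    hits = total = 0
    for _ in range(n):
        a = random.random() < 0.5                   # system A: fair coin
        b = b_given_a(a)                            # system B (honest or not)
        c = b if random.random() < 0.9 else not b   # system C: noisy copy of B
        if c:                                       # condition on observing C = 1
            total += 1
            hits += a
    return hits / total

random.seed(0)

honest = lambda a: a if random.random() < 0.9 else not a  # B: 90%-faithful copy of A
liar = lambda a: True   # B reports "1" no matter what, to look good

# With an honest B, observing C = 1 raises the estimate of P(A=1) from 0.5 to about 0.82.
print(simulate(100_000, honest))
# With a lying B, observing C tells you nothing: the estimate stays near 0.5.
print(simulate(100_000, liar))
```

Swap in any `b_given_a` that systematically distorts its input (intentionally or not) and the posterior collapses back toward the prior: that's the sense in which the inference "doesn't work" when B is lying to look good.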
Any time that mood seems to be cropping up or underlying someone’s decision procedure, it should be pushed back against.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
I think I’d have an easier time interacting with this if I understood better what exact actions or policies you’re pushing for.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
The unpacked “should” I imagined you implying was more like “If you do not feel it is important to have open/honest discourse, you are probably making a mistake. I.e., it’s likely that you’re not noticing the damage you’re doing, and if you really reflected on it honestly you’d probably …”
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn’t work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
That part is technical knowledge (and so is the related “the observation process doesn’t work [well] if system B is systematically distorting things in some way, whether intentionally or not”). And I definitely agree with that part, expect Eli does too, and generally don’t think it’s where the disagreement lives.
But you seem to have strongly implied, if not outright stated, that this isn’t just an interesting technical fact that exists in isolation: it implies an optimal (or at least improved) policy that individuals and groups can adopt to improve their truthseeking capability. This implies that we (at least, rationalists with roughly similar background assumptions to yours) should be doing something differently than we currently are. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don’t seem like they’re going to come about by accident.
There is a fact of the matter of what happens if you push for “thick skin” and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak and spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower, because it selects more for people who are conflict-prone than for people who are smart. I don’t actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do running on a human brain’s worth of compute. But this is quite different from what kind of decisionmaking works best when implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans, and getting depressed because the plans they made were actually impossible to run.
I mean, you don’t have to interact with it if you don’t feel like it! I’m not the boss of anyone!
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it’s pointing in an important direction. You’ve noted this as something I should probably pay special attention to. And, like, I think you’re right, so I’m trying to pay special attention to it.
The word “should” definitely doesn’t belong here. Like, that’s definitely a fair description of the push I’m making. Because I actually feel that way. But obviously, other people shouldn’t passionately advocate for open and honest discourse if they’re not actually passionate about open and honest discourse: that would be dishonest!
This seems to me like you’re saying “people shouldn’t have to advocate for being open and honest because people should be open and honest”
And then the question becomes… If you think it’s true that people should be open and honest, do you have policy proposals that help that become true?
I separated out the question of “stuff individuals should do unilaterally” from “norm enforcement” because it seems like at least some stuff doesn’t require any central decision nodes.
In particular, while “don’t lie” is an easy injunction to follow, “account for systematic distortions in what you say” is actually quite computationally hard, because there are a lot of distortions with different mechanisms, and different places one might intervene on one’s thought process and/or communication process. “Publicly say literally every inconvenient thing you think of” probably isn’t what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts.
I’m asking because I’m actually interested in improving on this dimension.
(Some current best guesses of mine, at least for my own values, are:
“Practice noticing heretical thoughts you think, and actually notice what things you can’t say, without obligating yourself to say them, so that you don’t accidentally train yourself not to think them.”
“Practice noticing opportunities to exhibit social courage, whether in low-stakes situations or important ones. Allocate some additional attention toward practicing social courage as a skill/muscle.” (It’s unclear to me how much to prioritize this, because there are two separate potential models: ‘social/epistemic courage is a muscle’ versus ‘social/epistemic courage is a resource you can spend, at the risk of using up people’s willingness to listen to you.’ There’s also the consideration that most things one might be courageous about actually aren’t important, and you’ll end up spending a lot of effort on things that don’t matter.))
But, I am interested in what you actually do within your own frame/value setup.
I’m more interested, as someone who has been a powerful central decision node at multiple points in my life and will likely be again (and as someone interested in institution design in general), in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I’ve shared elsewhere on radical transparency norms seem like one way to go about this.
I think cultural evolution and the marketplace of ideas seem like a good idea, but memetics unfortunately selects for things other than truth, and relying on memetics to propagate truth norms (if indeed propagating truth norms is good) feels insufficient.
This seems to me like you’re saying “people shouldn’t have to advocate for being open and honest because people should be open and honest”
And then the question becomes… If you think it’s true that people should be open and honest, do you have policy proposals that help that become true?
Not really? The concept of a “policy proposal” seems to presuppose control over some powerful central decision node, which I don’t think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.