There is a part of Sequences which I am too lazy to find now, which goes approximately like this: “If you make five maps of the same city, and you make those maps correctly, then the maps should be the same. So if you make five maps of the same city, and you find differences between them (for example, some streets A and B intersect on one map, but run parallel on another map), it means that you made a mistake somewhere, and the maps are not as good as you wish them to be. Nonetheless, you cannot fix this mistake by merely adjusting some of the maps to fit the other ones. The sameness of the maps is a desired outcome… but it must happen naturally, as a result of all maps correctly representing the same city… not artificially, as a result of adjusting the maps to fit each other.”
I get a similar feeling from some of your posts (including this one, also the punch bug). It seems to me that you care a lot about being right; and that is a good thing, and it’s kinda what this community is trying to be about. And you seem strongly frustrated by people coming to conclusions dramatically different from yours; which indeed means that someone is wrong about something. And I agree that this is frustrating, and it is something that wouldn’t happen if we succeeded at our stated goal.
But… it happens. And I worry that if we push against this, we may be “Goodharting” our search for truth. The common agreement should happen as a result of everyone examining the facts carefully and rationally. Not as a result of peer pressure insisting that coming to a common agreement is what rationalists should do. Like, they “should” in the sense that “this is what would automatically happen to perfectly rational agents, per Aumann’s theorem”, but not “should” in the sense that “they should be directly trying to achieve this”.
So, disagreeing with a specific thing seems okay to me. (“You guys said X, but I am convinced that non-X, here is my evidence.”) But this kind of meta commentary feels to me like pressure to do the wrong thing. (“You guys disagree with me on X, Y, Z. As aspiring rationalists, we should not be regularly disagreeing on so many things.”) Even if I agree that disagreeing on too many things too often is evidence that something went wrong. Still, the only correct way to agree on X, Y, Z, is to separately discuss X and come to a conclusion, discuss Y and come to a conclusion, and discuss Z and come to a conclusion. Not some bulk update like: “shit, it’s really bad to disagree on so many things, and Duncan seems to have a lot of support so I guess the wisdom of the crowd is on his side, so from now on I am going to automatically switch my opinion on everything to whatever Duncan says”. Convince me by arguments, not by meta-arguments about how disagreement is wrong.
And this is not intended as a defense of any specific things I said that you might disagree with. I am just a stupid human, with limited time and attention, often posting past midnight when I should be sleeping instead. It is plausible that I am wrong about a horribly large number of things. Sorry for that. But there is also a chance that you might be wrong about some things, or maybe we just misunderstand each other, so my updating to your (perceived) position might also be a mistake. I am also unhappy about us not being better synchronized in our perspectives, but I see it as an inevitable consequence of our imperfections, and I hope it gets better over time… but I am not going to force it.
If you convince me about specific mistakes, I hope I have the ability to change my mind. If you convince me about an embarrassing number of mistakes, I might even conclude that it is better for humanity if I simply stop writing, at least until my game improves significantly. But the meta-argument itself is not actionable for me (other than the general exhortation to be more careful, which I believe I am already heeding).
My sense is that I am disagreeing with (a set of) specific things.
The bulk update that I’m pushing for is not “switch my opinion to everything Duncan says,” but “start looking for ways to make the smaller, each-nameable-in-its-own-right slips in rationality happen less often.”
I don’t think I’m making a meta-argument about disagreement being wrong, except insofar as I’m asserting a belief that LessWrong ought to be for a specific thing, and that, in the case where there is consensus about that thing, other things should be deprioritized. I’m not even claiming that I’m definitely right about the thing LW ought to be for! But if it’s about that thing, or chooses to become so, then it needs to be less about the other thing.
If we had a consensus about “this comment is more rational, and that comment is less rational”, then reminding people to upvote the rational comments and downvote the irrational comments might result in karma scores that everyone would agree with.
(Modulo the fact already mentioned somewhere in this discussion that some comments are seen by more people than other comments, which would still result in more karma for the same degree of rationality.)
(Plus some other issues, such as: what if someone writes a comment containing one rational and one irrational paragraph; should we penalize needlessly long or hard-to-read comments; what if the comment is not quite good but contains a rare and important idea; etc.)
Thing is, I don’t believe we have this consensus. Some comments are obviously rational, some are obviously irrational, but there are many where different people have different opinions.
Technically, this can be measured. Like, find a person you believe to be so rational that you are satisfied with their level of rationality, who comments and votes on LW. Then find a long thread where you both voted, and check how many comments you upvoted/downvoted/ignored the same, and how many times you disagreed (not just upvote vs downvote, but also e.g. upvote vs no vote). My guess is that you overestimate how much your votes would match.
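The check described here can be sketched concretely. Below is an illustrative sketch with entirely hypothetical vote data (the function name and the sample votes are my own invention, not anything from this thread): each voter's action on a comment is recorded as "up", "down", or None for no vote, and we compute the fraction of comments on which two voters did the same thing, so that "upvote vs no vote" also counts as a disagreement.

```python
def vote_agreement(my_votes, their_votes):
    """Return the fraction of comments on which two voters did the same
    thing, where "the same thing" includes both voters abstaining."""
    assert len(my_votes) == len(their_votes), "must cover the same comments"
    matches = sum(a == b for a, b in zip(my_votes, their_votes))
    return matches / len(my_votes)

# Ten comments from one hypothetical thread; None means "did not vote".
mine   = ["up", "up", None, "down", "up", None, None, "up", "down", None]
theirs = ["up", None, None, "down", "up", "up", None, "down", "down", None]

print(vote_agreement(mine, theirs))  # 0.7: the two voters match on 7 of 10
```

A raw match rate like this overstates agreement when most comments get no vote from either person; a chance-corrected statistic such as Cohen's kappa would be a natural refinement.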
My understanding of your complaint is that people are often voting on comments regardless of their rationality. Which certainly happens. But in a parallel reality where all of us consistently tried our best to really only vote for good arguments… I think you would assume much greater consensus in votes than I would.
Rationality doesn’t make sense as a property of comments. It’s a quality of cognitive skills that work well (and might generate comments). Any judgement of comments according to the rationality of the algorithms that generated them is an ad hominem equivocation; the comments screen off the algorithms that generated them.
I think that you’re correct to point at a potential trap that people might slip into, of confusing the qualities of a comment with the properties of the algorithm that generated it. I think this is a thing people do, in fact, do, and it’s a projection, and it’s an often-wrong projection.
But I also think that there’s a straightforward thing that people mean by “this comment is more rational than that one,” and I think it’s a valid use of the word rational in the sense that 70+ out of 100 people would interpret it as meaning what the speaker actually intended.
Something like:
This is more careful with its inferences than that
This is more justified in its conclusions than that
This is more self-aware about the ways in which it might be skewed or off than that
This is more transparent and legible than that
This causes me to have an easier time thinking and seeing clearly than that
… and I think “thinking about how to reliably distinguish between [this] and [that] is a worthwhile activity, and a line of inquiry that’s likely to lead to promising ideas for improving the site and the community.”
I’m specifically boosting the prescriptivist point about not using the word “rational” in an inflationary way that doesn’t make literal sense. Comments can be valid, explicit on their own epistemic status, true, relevant to their intended context, not making well-known mistakes, and so on and so forth, but they can’t be rational, for the reason I gave, in the sense of “rational” as a property of cognitive algorithms.
Mmm, I think this is a mistake.
Incidentally, I like the distinction between error and mistake from linguistics, where an error is systematic or deliberatively endorsed behavior, while a mistake is intermittent behavior that’s not deliberatively endorsed. That would have my comment make an error, not a mistake.
I like it.

I agree that the consensus doesn’t exist.

In part, that’s why several of my suggestions depended on a small number of relatively concrete observables (like distinguishing inference from observation).
But also, I think that a substantial driver of the lack of consensus/spread of opinion lies in the fact that the population of LessWrong today, in my best estimation, contains a lot of people who “ought not to be here,” not in the sense that they’re bad or wrong or anything, but in the sense that a gym ought mostly only contain people interested in doing physical activity and a library ought mostly only contain people interested in looking at books. There is some number of non-central or non-bought-in members that a given population can sustain, and right now I think LessWrong is holding more than it can handle.
I think a tighter population would still lack consensus in the way you highlight, but less so.
FWIW, I’m someone who believes myself to have the occasional useful contribution on LW, but I also have an intuitive sense of being “dangerously non-central” here, with the first word of that expanding to something like “likely to be welcomed anyway, but in a way which would do more collateral damage to community alignment (via dilution) than is broadly recognized in a way that people are willing to act on”. I apply a significant amount of secondary self-restraint on those grounds to what I post, possibly not enough (though my thoughts about what an actually appropriate strategy would be to apply here are too muddled to say that with confidence), and my emotional sense endorses my use of this restraint (in particular, it doesn’t cause noticeable feelings of hostility or rejection in either direction).
I’m saying this out loud partly in case anyone else who’s had similar first-person experiences would otherwise feel awkward about describing them here, leaving a cluster of evidence missing; I don’t know how large that group would be.
“My Kind of Reflection”.
Also in “The Modesty Argument” and “No License To Be Human”.