The official motto in the logo is “refining the art of human rationality”, which implies that our rationality is still imperfect.
It’s still imperfect, but can’t people try a little harder?
I don’t see why it’s absurd or bad PR to say that we’re more rational than most other communities, but still not rational enough to talk about politics.
When will we be rational enough to talk about politics (or subjects with political implications)? I am skeptical that any of the justifications for not talking about politics will ever change. Right now, we have a bunch of intelligent, rationalist people who have read at least a smattering of Eliezer’s writings, yet who have differing experiences and perspectives on certain subjects, with a lot of inferential distance in between. We have veteran community members, and we have new members. In a few years, we will have exactly the same thing, and people will still be saying that politics is the “mind-killer.”
I have to wonder, if LW isn’t ready to talk about politics now, will we ever be ready (on our current hardware)? I am skeptical that we all can just keep exercising our rationality on non-political subjects, and then one day a bell will go ding, and suddenly a critical mass of us will be rational enough to discuss politics.
You can’t learn to discuss politics rationally merely by studying rationality in the abstract, or studying it when applied to non-political subjects. Rationality applied to politics is a particular skill that must be exercised. Biases will flare up even for intelligent, rationalist people who know better. The only way for LW to become good at discussing politics is to practice and get better.
(And even now, LW is not bad at discussing politics, and there have been many great political discussions here. While many of them have been a bit heated by the standards of LW, they are downright friendly compared to practically anywhere else.)
Unfortunately, the rest of the world doesn’t have the same level of humility about discussing political subjects. Many of the people most capable of discussing politics rationally seem to have the most humility. How long can we afford to have rationalists sit out of politics?
Hang on. Instrumental rationality.
If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.
If that doesn’t sound appealing (to me, it doesn’t, but people I admire often do throw themselves into political work) then talking politics is just shooting the shit. Even if you’re very serious and rational about it, it’s pretty much recreation.
I used to really like politics as recreation—it made me feel good—but it has its downsides. One, it can take up a lot of time that you could use to build skills, get work done, or have more intense fun (a night out on the town vs. a night in on the internet). Two, it can make you dislike people that you’d otherwise like; it screws with personal relationships. Three, there’s something that bothers me morally, a little, about using issues that are serious life-and-death problems for other people as my form of recreation. Four, in some cases, including mine, politics can hurt your personal development in a particular way: I would palliate my sense of not being a good person by reassuring myself that I had the right opinions. Now I’m trying to actually be a better person in practice, and also trying to worry less about imaginary sins; it’s a work in progress, of course, but I feel I don’t need my “fix” of righteous anger as much.
This is a personal experience, of course, but I think that it’s worth it for everyone to ask, “Why do I talk politics? Do I want to talk politics?”
“If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.”
You omit the most important step, which comes before starting an organization. That’s figuring out what politics this organization should espouse and how it should espouse those politics.
If my views are almost diametrically opposed to Robin Hanson’s, and I have no good reason to think I’m more rational than Robin or otherwise in a better epistemic position, I’m not rationally justified in setting up an organization to espouse my views, because I should consider, in that event, that my views have at least a .5 chance of being wrong, probably much higher. The worst thing people can do is set up political projects based on ill-considered principles and end up advocating the wrong policies. As long as rational, informed people disagree, one isn’t entitled to a strongly held political position.
What you said might make sense if political debate were strictly about means and there were general agreement on ends. But it is not. And your views on the ends of policy are worth every bit as much as Dr. Hanson’s, however much you worry that his thinking might be better than yours concerning means.
Do you think having LW discuss politics will help save the world? If so, how do you envision it happening?
Just to make sure there is no confusion about who stands where on the issue, I’d like to re-emphasize that I definitely don’t support making politics a prominent item on the discussion agenda of LW. What I am concerned about are topics that are on LW’s discussion agenda as presently defined, but have some implications about political and other charged issues, and the question of whether these should be avoided. (Though of course this is complicated by the fact that the present discussion agenda is somewhat vague and a matter of some disagreement.)
Why do you find it beneficial to bring up implications about political and other charged issues, when discussing topics that are on LW’s discussion agenda?
I can understand it if you’re making some point about improving rationality in general, and the best example to illustrate your point happens to be political, and you judge the benefit of using that example to be worth the cost (e.g., the risk that LW slides down the slippery slope towards politics being prominently debated, and others finding it difficult to respond to your point because they want to avoid contributing to sliding down that slippery slope).
If it’s more like “btw, here are some political implications of the idea I was talking about” then I think we should avoid those.
It could be that by far the main corruptor of rationality, which does by far the most damage however you want to measure it, is the struggle for political power. If that’s the case, then it may be unavoidable to discuss power and therefore politics.
The high point of human rationality is science, but as it happens, the scientific establishment has been so thoroughly dominated by the government (government supports much of academia, government supports much of science through grants, government passes laws which make it difficult to conduct science without official government approval, government controls the dissemination of scientific claims) that corruption of science by politics seems inevitable. If in fact science is corrupt from top to bottom (as it may be), then such corruption is almost certainly almost entirely at the hands of the state, and is therefore almost certainly political. So, if science is thoroughly corrupt, then it is almost certainly virtually impossible to discuss that corruption at all seriously without getting heavily into politics.
The poster child perhaps, but I wouldn’t go as far as to say the high point. :)
“Why do you find it beneficial to bring up implications about political and other charged issues, when discussing topics that are on LW’s discussion agenda?”
I don’t think one should bring up such implications just for the hell of it, when they contribute nothing of substance. I also agree that among otherwise equally useful examples, one should use those that are least distracting and that minimize the danger of dissension. There’s a simple cost-benefit case there, which I don’t dispute. However, it seems to me that many relevant topics are impossible to discuss without bringing up such implications.
Take for example my original post that started this discussion. For anyone who strives to be less wrong about almost anything, one of the absolutely crucial questions is what confidence should be assigned to what the academic mainstream says, and in this regard, I consider the topic of the post extremely relevant for LW. (If you believe otherwise, I would be curious to see the argument why—and note that what I’m arguing now is independent of what you might think about the quality of its content.) Now, I think nobody could dispute that on many topics the academic opinion is biased to some extent due to political and ideological influences, so it’s important to be able to recognize and evaluate such situations. Moreover, as far as I see, this represents a peculiar class of bias that cannot be adequately illustrated and discussed without bringing up some concrete examples of biases due to ideological or political influences. So, how could one possibly approach this issue while strictly avoiding the mention of anything that’s ideologically charged at least by implication?
Yet some people apparently believe that this line of inquiry already goes too far towards dangerous and undesirable topics. If this belief is correct, in the sense that maintaining a high quality of discourse really demands such a severe restriction on permissible topics, then this, in my opinion, decisively defeats the idea of having a forum like LW, under any reasonable interpretation of its mission statement, vague as it is. It effectively implies that people are inherently incapable of rational discourse unless it’s stringently disciplined and focused on a narrow range of topics, the way expert technical forums are. And this is definitely not the only example of how people discussing the general problem of sorting out truth from bias and nonsense will inevitably arrive at charged issues.
There are also other important points here, on which I’ve already elaborated in my other comments, which all stem from the same fundamental observation, namely that those topics where one needs an extraordinary level of rationality to escape bias and delusion are often exactly those that are commonly a matter of impassioned and polarized opinion. In other words, general skills in rational thinking and overcoming bias are of little use if one sticks to technical topics in which experts already have sophisticated, so to speak, application-specific techniques for eliminating bias and nonsense. (Which often work well—one can easily think of brilliant scientists and technical experts with outright delusional opinions outside of their narrow specialties—and when they don’t, the issue may well be impossible to analyze correctly without getting into charged topics.) But even if you disagree with my view expressed in this last paragraph, I think your question is adequately answered by the points I made before that.
“So, how could one possibly approach this issue while strictly avoiding the mention of anything that’s ideologically charged at least by implication?”
How about using an example from the past? A controversy that was ideologically charged at some point, but no longer inflames passions in the present? I’m not sure if there are such examples that would suit your purpose, but it seems worth looking into, if you haven’t already.
Overall I don’t think we disagree much. We both think whether to bring up political implications is a matter of cost-benefit analysis and we seem to largely agree on what count as costs and what as benefits. I would just caution that we’re probably biased to over-estimate the net benefit of bringing up political implications since many of us feel strongly motivated to spread our favorite political ideas. (If you’re satisfied that you’ve already taken into account such biases, then that’s good enough for me.)
“How about using an example from the past? A controversy that was ideologically charged at some point, but no longer inflames passions in the present?”
Trouble is, the present system that produces reputable and accredited science and scholarship is a rather novel creation. Things worked very differently as recently as two or three generations ago, and I believe that an accurate general model for assessing its soundness on various issues necessarily has to incorporate judgments about some contemporary polarized and charged topics, which have no historical precedent that would be safely remote from present-day controversies. As Constant wrote in another reply to your above comment, modern science is so deeply intertwined with the modern system of government that it’s impossible to accurately analyze one without asking any questions about the other.
And to emphasize this important point again, I believe that coming up with such a model is a matter of supreme importance to anyone who wants to have correct views on almost any topic outside of one’s own narrow areas of expertise. Our society is historically unique in that we have these vast institutions whose mission is to produce and publish accurate insight on all imaginable topics, and for anyone intellectually curious, the skill of assessing the quality of their output is as important as distinguishing edible from poisonous fruit is for a forager.
“I would just caution that we’re probably biased to over-estimate the net benefit of bringing up political implications since many of us feel strongly motivated to spread our favorite political ideas.”
That is surely a valid concern, and I probably display this bias myself at least occasionally. Like most biases, however, it also has its mirror image, i.e. the bias to avoid questions for fear of stirring up controversy, which one should also watch for.
This is not only because excessive caution means avoiding topics that would in fact be worth pursuing, but also because of a more subtle problem. Namely, the set of all questions relevant to a topic may include some safe and innocent ones alongside other more polarizing and charged ones. Deciding to include only the former in one’s assessment and ignoring the latter for fear of controversy may in fact fatally bias one’s final conclusions. I have seen instances of posts and articles on LW that, in my opinion, suffer from this exact problem.
As far as I know, nobody cares what LessWrong commenters think about political issues. LessWrong should concentrate on less crowded topics where it potentially has actual influence, like AI risks.
Do you (pl.) think it would be valuable to have a discussion topic on whether political discussion could be fruitful (possibly with links to relevant discussions, etc.)?
(Not to say “take it elsewhere”, but rather, “should we have this discussion somewhere it’ll be easier to keep track of”.)