There is a tendency to downvote articles and comments with a political subtext, accompanied by a remark about how politics is the mind-killer. I completely understand that nobody wants their mind to be killed; however, I disagree with the methods employed.
I don’t think many people are worried about their minds being killed so much as about the community being damaged.
People who are rational enough not to laugh at superhuman intelligences threatening galactic civilizations, and who are not angered by a bunch of hardcore atheists, will likely be able to consider arguments on their merits rather than be driven away by the political affiliations of some members.
I think the worry is that discussing politics runs the risk of creating tribalism, which would severely damage our ability to discuss the other topics.
How is politics defined on LW? My current understanding is that discussing the goal system of an FAI can be considered politics. After all, an FAI would be ruling, making decisions while running the universe. What difference is there between a health-care debate, or a debate on waging war against some country, and the implementation of CEV over any other goal system, with the possible suppression of any alien civilization or other potential minds?
If you consider creating a fooming AI a political act, which I think is reasonable given its consequences, then you not only discuss politics on LW but also ask people to contribute to a certain political player who is trying to overthrow all governments in favor of a new world order. Sounds crazy, but I don’t see how that differs from ordinary politics, except in scale.
You make a good point, so I’ll try to be clearer about why starting a health-care debate on LW would worry me.
Politics has a tendency to create what I call ‘bad entanglements’. An example is the fact that in US politics there is a correlation between supporting health care and supporting homosexual marriage. There is no particular reason for this; I personally can’t see any inference by which one position could be derived from the other. It happened only because both issues were debated by the same people in the same environment, and Blue and Green politics turned them into two sides, each with its own position on both issues (and many others).
Overall, I would say this entanglement reduces the chance of a good resolution for either issue.
What I don’t want to see is the problem of Friendly AI developing its own bad entanglements. I have already seen this happening to some extent within the Friendly AI debate: those who believe it is possible are more likely to believe it’s necessary, and those who think it’s impossible tend to think it’s unnecessary (not yet a very strong correlation, thankfully). This suggests we may not be good enough at rationality to avoid this problem yet.
Since bad entanglements seem quite difficult to excise once they exist, our best strategy is to make sure they don’t get started in the first place, which can be done by not discussing too many controversial issues in one place. In fact, I would suggest that as a general policy for any community that actually wants to resolve the issues it discusses.
It isn’t. But by any definition respecting ordinary language, creating a fooming AI is a very political act. Yet the subject has been discussed many times on LW without anyone’s mind being killed. (Some minor injuries, perhaps ;) ) Thank goodness the LW community doesn’t take its own rules too seriously.
There is no need to start a thread on the public and private sectors’ roles in health care. But if the subject comes up naturally in a discussion on, say, seeking expert psychological help in overcoming some form of irrationality, then let it. Let it be touched on, as far as is relevant, without displacing the original topic. Putting up crude firewalls is not worth the price in artificially constrained discussion. The “slippery slope” isn’t actually slippery, as demonstrated by the response to Mass_Driver’s recent thread.