I downvote any post that says “I expect I’ll get downvoted for this, but...” or “the fact that I was downvoted proves I’m right!”
I’m fond of downvoting “I dare you to downvote this!”
So, in other words, you automatically downvote anyone who explicitly mentions that they realize they are violating community norms by posting whatever it is they are posting, but feels that the content of their post is worth the probable downvotes? That IS fairly explicitly suppressing dissent, and I have downvoted you for doing so.
I don’t think it is suppression of dissent per se. It is more that the behavior is annoying: it implies caring a lot about the karma system, and it is often not even the case that people who say they will get downvoted actually are. If it is worth the probable downvote, then they can, you know, just take the downvote. If they want to point out that a view is unpopular, they can just say that explicitly. It is also annoying to people like me, who are vocal about a number of issues that could be controversial here (e.g. criticizing Bayesianism, cryonics, and whether intelligence explosions would be likely) and get voted up. More often than not, when someone claims they are getting downvoted for having unpopular opinions, they are getting downvoted in practice for having bad arguments or for being uncivil.
There are of course exceptions to this rule, and it is disturbing to note that the exceptions seem to be becoming more common (see, for example, this exchange, where two comments are made with about the same quality of argument and about the same degree of incivility (“I’m starting to hate that you’ve become a fixture here.” v. “idiot”), but one of the comments is at +10 and the other is at −7). Even presuming that there’s a real disagreement in quality or correctness of the arguments made, this suggests that uncivil remarks are tolerated more when people agree with the rest of the claim being made. That’s problematic. And this exchange was part of what prompted me to suggest earlier that we should be concerned that AGI risk might be becoming a mindkiller here. But even so, issues like this still seem uncommon.
Overall, if one feels the need to claim that one is going to be downvoted, one might even be correct, but it will often not be for the reasons one thinks.
More often than not, when someone claims they are getting downvoted for having unpopular opinions, they are getting downvoted in practice for having bad arguments or for being uncivil.
Bears repeating.
I don’t think it’s ‘caring a lot about the karma system’ per se so much as the more general case of ‘caring about the approval and/or disapproval of one’s peers’. The former is fairly abstract, but the latter is a deep ancestral motivation.
Like I said before, it’s clearly not much in the way of suppression. That said, barring rare incidents of actual moderation, it is the only ‘suppression’ that occurs here; there is a view in various circles that there is, in fact, suppression of dissent; and people on the site frequently wonder why there are not more dissenting viewpoints here and look for ways to find more. So it is important to look at the issue in great depth, since it is more significant than it seems on the surface.
Exactly right. But a group that claims to be dedicated to rationality loses all credibility when participants not only abstain from considering this question but adamantly resist it. The only upvote you received for your post—which makes this vital point—is mine.
This thread examines Holden Karnofsky’s charge that SIAI isn’t exemplarily rational. As part of that examination, the broader LW environment on which it relies is germane. That much has been granted by most posters. But when the conversation reaches the touchstone of how the community expresses its approval and disapproval, the comments are declared illegitimate and downvoted (or, if the comments are polite and hyper-correct, at least not upvoted).
The group harbors taboos. Among the subjects covered by them: the very possibility of nonevolved AI; karma and the group’s own process generally (an indispensable discussion); and politics. (I’ve already posted a cite showing how the proscription on politics works, using as an example the editors’ unwillingness to promote the post despite its receiving almost 800 comments.)
These defects in the rational process of LW help sustain Karnofsky’s argument that SIAI is not to be recommended on the basis of the exemplary rationality of its staff and leadership. That staff and leadership are also the leadership of LW, and they have failed by refusing to lead the forum toward understanding the biases in its own process. They have fostered bias by creating the taboo on politics, as though you can rationally understand the world while dogmatically refusing even to consider a big part of it because it “kills” your mind.
P.S. Thank you for the upvotes where you perceived bias.
...AGI risk might be becoming a mindkiller here...
Nah. If there is a mindkiller, then it is the reputation system. Some of the hostility is the result of the overblown ego and attitude of some of its proponents and their general style of discussion. They have created an impenetrable fortress that shields them from any criticism:
Troll: If you are so smart and rational, why don’t you fund yourself? Why isn’t your organisation sustainable?
SI/LW: Rationality is only aimed at expected winning.
Troll: But you don’t seem to be winning yet. Have you considered the possibility that your methods are suboptimal? Have you set yourself any goals that you expect to do better at than less rational folks, so as to test your rationality?
SI/LW: Rationality is a ceteris paribus predictor of success.
Troll: Okay, but given that you spend a lot of time on refining your rationality, you must believe that it is worth it somehow? What makes you think so then?
SI/LW: We are trying to create a friendly artificial intelligence, implement it, and run the AI, at which point, if all goes well, we Win. We believe that rationality is very important to achieving that goal.
Troll: I see. But there surely must be some sub-goals that you anticipate being able to achieve, and thereby test whether your rationality skills are worth the effort?
SI/LW: Many of the problems related to navigating the Singularity have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of the problem.
Troll: Has there been any success in formalizing one of the problems that you need to solve?
SI/LW: There are some unpublished results that we have had no time to put into a coherent form yet.
Troll: It seems that there is no way for me to judge if it is worth it to read up on your writings on rationality.
SI/LW: If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.
Troll: Awesome, I’ll do that next time. But for now, why would I bet on you or even trust that you know what you are talking about?
SI/LW: We spent a lot of time on debiasing techniques and thought long and hard about the relevant issues.
Troll: That seems to be insufficient evidence given the nature of your claims and that you are asking for money.
SI/LW: We make predictions. We make statements of confidence about events that merely sound startling. You are asking for evidence we couldn’t possibly be expected to be able to provide, even given that we are right.
Troll: But what do you anticipate seeing if your ideas are right? Is there any possibility of updating on evidence?
SI/LW: No, once the evidence is available it will be too late.
Troll: But then why would I trust you instead of those experts who tell me that you are wrong?
SI/LW: You will soon learn that your smart friends and experts are not remotely close to the rationality standards of SI/LW, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.
Troll: But you have never achieved anything when it comes to AI, so why would I trust your reasoning on the topic?
SI/LW: That is magical thinking about prestige. Prestige is not a good indicator of quality.
Troll: You won’t convince me without providing further evidence.
SI/LW: That is a fully general counterargument you can use to discount any conclusion.
The last exchange was hilarious. This is parody, right?
Downvoted for downvoting downvoting of downvoting of downvoting.
If you do the same to this comment, we can enter a stable loop!