(People seem to be hella downvoting this, and I am kinda confused as to why. I can see not finding it particularly persuasive or interesting. I’m guessing this is just sad tribalism but curious if people have a particular objection I’m missing)
There are some users around who strong-downvote anyone trying to make any arguments on the basis of CEV, and who seem very triggered by the concept. This is sad and has derailed a bunch of conversations in the past. My guess is the same is going on here.
Do you not have the power/tools to stop such behavior from taking effect? This sounds like the exact problem that killed LW 1.0, and which I was led to believe is now solved.
We have much better tools to detect downvoting of specific users, and unusual voting activity by a specific user. But if a topic only comes up occasionally, and the users who vote on that topic also regularly vote on other things, I don't know of any high-level statistic that would easily detect it, and I think it would have very substantial chilling effects if we started policing that kind of behavior.
There probably are technical solutions, but it's a trickier kind of problem than the one LW 1.0 faced, and we haven't built them.
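To illustrate the shape of the problem, here's a minimal sketch of the kind of per-voter-per-author statistic you'd want (the `votes` schema and field names are hypothetical, not the site's actual data model). The difficulty is visible in the code itself: the statistic only becomes meaningful once a given voter has cast many votes on a given author, which is exactly what doesn't happen when a topic only comes up occasionally.

```python
from collections import Counter

def targeted_downvoters(votes, min_pair_votes=10, ratio_threshold=3.0):
    """Flag (voter, author) pairs where the voter's downvote rate on that
    author far exceeds the voter's overall downvote rate.

    `votes`: iterable of (voter_id, author_id, is_downvote) tuples.
    This schema is hypothetical, not LessWrong's actual one.
    """
    total = Counter()       # all votes cast, per voter
    downs = Counter()       # all downvotes cast, per voter
    pair_total = Counter()  # votes cast, per (voter, author) pair
    pair_downs = Counter()  # downvotes cast, per (voter, author) pair

    for voter, author, is_down in votes:
        total[voter] += 1
        pair_total[voter, author] += 1
        if is_down:
            downs[voter] += 1
            pair_downs[voter, author] += 1

    flagged = []
    for (voter, author), n in pair_total.items():
        if n < min_pair_votes:
            continue  # too little interaction to infer targeting
        base_rate = downs[voter] / total[voter]
        pair_rate = pair_downs[voter, author] / n
        # Flag when the pair-specific downvote rate is several times the
        # voter's baseline (floored so all-upvote voters can still trip it).
        if pair_rate > ratio_threshold * max(base_rate, 0.05):
            flagged.append((voter, author, pair_rate, base_rate))
    return flagged
```

A real system would want a proper significance test rather than a fixed ratio, but this is the shape of the statistic, and the `min_pair_votes` guard is where sparse, topic-driven voting slips through.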
I'd be more interested in tools that detect downvotes cast before people have started reading, i.e. on the basis of the title alone. I'd give even odds that more than half of the downvotes on this post came within one minute of opening it, as a reaction to the title or the first paragraph, not to the discussion of CEV.
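Something like the following, assuming (hypothetically) that the site logged both a first-view timestamp and a vote timestamp per user per post; both logs here are made up for illustration:

```python
from datetime import timedelta

FAST_VOTE_WINDOW = timedelta(seconds=60)

def premature_downvotes(view_events, vote_events):
    """Return downvotes cast within a minute of the voter first opening
    the post -- too fast to have read past the title and first paragraph.

    Hypothetical inputs:
      view_events: dict mapping (user_id, post_id) -> first_view_timestamp
      vote_events: iterable of (user_id, post_id, timestamp, is_downvote)
    """
    flagged = []
    for user, post, voted_at, is_down in vote_events:
        if not is_down:
            continue
        opened_at = view_events.get((user, post))
        if opened_at is not None and voted_at - opened_at < FAST_VOTE_WINDOW:
            flagged.append((user, post, voted_at - opened_at))
    return flagged
```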
I was one of the people who downvoted, and my reasoning, at a fundamental level, is that a lot of their argument rests on fabricating options that only appear to work because they ignore the question of why value disagreement is less tolerable in an AI-controlled future than it is now.
I have a longer comment below, and @sunwillrise makes a similar point, but AI safety's attitude of minimizing value conflict makes more sense than the post gives it credit for, and the mechanisms that keep value disagreements from blowing up into take-over attempts or mass violence rely on certain features of modern society that AGI will break (and the post says nothing about how to actually make its vision sustainable):
https://www.lesswrong.com/posts/iJzDm6h5a2CK9etYZ/a-conservative-vision-for-ai-alignment#eBdRwtZeJqJkKt2hn