You know, I keep hoping that you’d update your evaluation of this community and especially your estimate of how much we’ve already thought about these things, but maybe it’s time for me to update...
Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.
I just realized there’s another possible explanation: discussions/arguments between “useful commenters” usually stop getting upvoted after a certain point (probably because the disagreements are usually over peripheral issues that don’t interest a huge number of readers), whereas arguments against “hopeless cases” seem good for unlimited karma (probably because you’re making central points that everyone can understand). Perhaps I and others have been unconsciously letting this affect our behavior?
There are enough important differences of opinion between useful commenters about what we all should do on the grand scale that I would expect it to be at least possible, somehow, to create relatively high expected value by hashing these disagreements out. If the discussion is over peripheral issues that don’t much affect the answer to such big questions, maybe we’re going about it the wrong way.
I see. I had hoped to raise some debate by posting Some Thoughts on Singularity Strategies, but few FAI supporters responded, and none from SIAI. I have the feeling (and also some evidence) that there aren’t many people, aside from Eliezer, who are very gung-ho on trying to build an FAI directly.
I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.
That’s good to know. Was the disagreement more about how hard the philosophical problems are, or about how good Eliezer is at solving philosophical problems, or some of both?
I’m not sure. Arguing with “hopeless cases” is high risk but high return (if we succeed in bringing in new blood and new insights). Arguing with other “useful commenters” perhaps marginally improves our beliefs and how we approach the problems we’re trying to solve, but much of the time when I disagree with some “useful commenter” I still think both of our approaches ought to be explored so there’s not that much gain from arguing with them. I’d typically state my reasons (just in case I’m making some kind of gross error) and leave it at that if it doesn’t change their mind.
I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.
> much of the time when I disagree with some “useful commenter” I still think both of our approaches ought to be explored so there’s not that much gain from arguing with them
I don’t understand. Doesn’t arguing with them constitute exploring the different approaches?
> I think good new insights in practice tend to come from old commenters who rethink things one point at a time and not as much from new commenters who start out with an attitude of belligerent dismissal.
I think it’s good to have some natural contrarians/skeptics around who like to find flaws in whatever ideas they see. I guess I played this role somewhat back in the OB days, but less so now that I’m closer to the “inner circle”. Of course I was more careful to make sure the flaws were real flaws, and not very belligerent...
> I don’t understand. Doesn’t arguing with them constitute exploring the different approaches?
Maybe we’re not thinking about the same things. I’m talking about cases like when cousin_it or Nesov has some decision theory idea that I don’t think is particularly promising: I tend to let them work on it and either reach that conclusion themselves or obtain some undeniable result, instead of trying to talk them out of it and work on my preferred approaches. What kind of arguments are you thinking of?
I suppose I was thinking of arguments more informal than decision theory, and I suppose in the context of such informal arguments, exchanging a lot of small chunks of reasoning seems more useful than it does in the context of building decision theory models.
> Yes. In general, the useful commenters on LessWrong seem to spend too much time arguing with hopeless cases and not enough time arguing with other useful commenters.
When I went on house-to-house preaching with other Jehovah’s Witnesses as a child, this was almost exactly what more experienced members told me to do when we encountered people who didn’t seem to understand that we were clearly right and only trying to warn them that the end is nigh.
I couldn’t follow that. Could you say it again in more detail?
What kinds of people would you encounter, and which were you told to spend time proselytizing? Were there many who immediately agreed with you? And was it best to give them lots of time?
> Was the disagreement more about how hard the philosophical problems are, or about how good Eliezer is at solving philosophical problems, or some of both?

Some of both.