I just realized there’s another possible explanation: discussions/arguments between “useful commenters” usually stop getting upvoted after a certain point (probably because the disagreements are usually over peripheral issues that don’t interest a huge number of readers), whereas arguments against “hopeless cases” seem good for unlimited karma (probably because you’re making central points that everyone can understand). Perhaps I and others have been unconsciously letting this affect our behavior?
There are enough important differences of opinion between useful commenters about what we all should do on the grand scale that I would expect it to be at least possible, somehow, to create relatively high expected value by hashing these disagreements out. If the discussion is over peripheral issues that don’t much affect the answer to such big questions, maybe we’re going about it the wrong way.
I see. I had hoped to raise some debate by posting Some Thoughts on Singularity Strategies, but few FAI supporters responded, and none from SIAI. I have the feeling (and also some evidence) that there aren’t many people, aside from Eliezer, who are very gung-ho on trying to build an FAI directly.
I did have a private chat with Eliezer recently where I tried to find out why we disagree over FAI, and it seems to mostly come down to different estimates on how hard the philosophical problems involved are compared to his ability to correctly solve them.
That’s good to know. Was the disagreement more about how hard the philosophical problems are, or about how good Eliezer is at solving philosophical problems, or some of both?
Some of both.