One ideal I have never abandoned and never considered abandoning is that if you disagree with a final conclusion, you ought to be able to exhibit a particular premise or reasoning step that you disagree with. Michael Vassar views this as a fundamental divide that separates sanitykind from Muggles; with Tyler Cowen, for example, rejecting cryonics but not feeling obligated to reject any particular premise of Hanson’s. Perhaps we should call ourselves the Modusponenstsukai.
It’s usually much harder to find a specific flaw in an argument than it is to see that there is probably something wrong with the conclusion. For example, I probably won’t be able to spot the specific flaw in most proposed designs for a perpetual motion machine, but I can still conclude that it won’t work as advertised!
I read “ought to be able to” not as “you’re not allowed to reject the conclusion without rejecting a premise” so much as “you ought to be able to, so when you find you’re not able to, it should bother you; you have learned that there’s a key failing in your understanding of that area.”
I agree, and while reading Eliezer’s comment I mentally added something like “or if you can’t, then you explicitly model your confusion as a limitation in your current understanding and so lower your confidence in the related suspect reasoning appropriately—ideally until your confusion can be resolved and your curiosity satisfied” as a footnote.
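Just for fun: Classic Mathematical Fallacies—can you spot the step that’s wrong?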
“The probability of us being wiped out by badly done AI is at least 20%.” I agree; the assumption of risks from AI is by itself reasonable. But I am skeptical of making complex predictions based on that assumption, and of calculating the expected utility of mitigating risks from AI according to the utility associated with its logical implications.
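Take your following comment: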
I’ll readily concede that my exact species extinction numbers were made up. But does it really matter? Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%.” I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.
I don’t disagree that friendly AI research is currently a better option for charitable giving than charities concerned with environmental problems. Yet I have a hard time accepting that discounting the extinction of most species on the basis of the expected utility of colonizing the Hercules supercluster is sensible.
If you want to convince people like Holden Karnofsky and John Baez, then you have to show that risks from AI are more likely than they believe and that contributing to SI can make a difference. If you just argue in terms of logical implications, they will continue to frame SI in terms of Pascal’s mugging.
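To make that framing concrete (a sketch with illustrative numbers of my own; neither the probability nor the payoff below comes from this thread or from Karnofsky, Baez, or SI), the Pascal’s-mugging structure is an expected-value calculation whose answer is driven almost entirely by the size of the claimed payoff:

```latex
% Illustrative expected-value arithmetic behind the Pascal's mugging framing.
% p = probability granted to the argument's conclusion (assumed tiny),
% U = utility claimed to be at stake (assumed astronomical).
\[
  \mathbb{E}[\text{acting on the argument}]
  \;=\; p \cdot U
  \;=\; 10^{-6} \times 10^{20}\ \text{lives}
  \;=\; 10^{14}\ \text{lives}.
\]
% Cutting p by several further orders of magnitude still leaves the expected
% value enormous, so the recommendation is nearly insensitive to how plausible
% the premises are judged to be, which is exactly the structure skeptics
% distrust as a mugging.
```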
...you ought to be able to exhibit a particular premise or reasoning step that you disagree with.
I can’t. I can only voice my discomfort. And according to your posts on the Lifespan Dilemma and Pascal’s mugging, you share that discomfort, yet you too are unable to pinpoint a specific step that you disagree with.
Michael Vassar views this as a fundamental divide that separates sanitykind from Muggles; with Tyler Cowen, for example, rejecting cryonics but not feeling obligated to reject any particular premise of Hanson’s.
If an argument relies on many premises, I can reject the conclusion, i.e., assign it a low probability, while accepting, i.e., assigning a high probability to, each individual premise.
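To spell out the arithmetic behind this (a sketch with illustrative numbers of my own; the premise count and the 0.9 figure are assumptions, not anything stated in this thread):

```latex
% If a conclusion C follows from the conjunction of premises A_1, ..., A_n,
% probabilistic coherence only requires
\[
  P(C) \;\ge\; P\!\left(\bigwedge_{i=1}^{n} A_i\right)
       \;\ge\; 1 - \sum_{i=1}^{n}\bigl(1 - P(A_i)\bigr).
\]
% With n = 10 premises each assigned P(A_i) = 0.9, that lower bound is
% 1 - 10(0.1) = 0, i.e. vacuous; even assuming independence the conjunction
% is only 0.9^10 ~ 0.35. So assigning 90% to every single premise is
% consistent with assigning a low probability to the conclusion.
```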
One man’s modus ponens is another man’s modus tollens.