But if there’s even a chance …
Holden cites two posts (Why We Can’t Take Expected Value Estimates Literally and Maximizing Cost-effectiveness via Critical Inquiry). They are supposed to support the argument that small or very small changes to the probability of an existential-risk event are not worth caring about or donating money toward.
I think that these posts both have serious problems (see the comments, especially Carl Shulman’s). In particular, Why We Can’t Take Expected Value Estimates Literally was heavily criticised by Robin Hanson in On Fudge Factors.
Robin Hanson has been listed as the other major “intelligent/competent” critic of SIAI. That he criticises what seems to be the keystone of Holden’s argument should be cause for concern for Holden (after all, if “even a chance” is good enough, then all the other criticisms melt away).
This would be a much more serious criticism of SIAI if Holden and Hanson could come to agreement on what exactly the problem with SIAI is, and if Holden could sort out the problems with these two supporting posts.*
(*Of course, they won’t do that without substantial revision of one or both of their positions, because Hanson is on the same page as the rest of SIAI with regard to expected utility; see On Fudge Factors. Hanson’s disagreement with SIAI is a different one: approximately, that he thinks ems will come first and that a singleton is both bad and unlikely. Moreover, Hanson’s axiology is unintuitive enough that he is not really on the same page as most people about what counts as a good or bad outcome.)
So, I stipulate that Robin, whom Eliezer considers the only other major “intelligent/competent” critic of SI, disagrees with this aspect of Holden’s position. I also stipulate that this aspect is the keystone of Holden’s argument, and without it all the rest of it is irrelevant. (I’m not sure either of those statements is actually true, but they’re beside my point here.)
I do not understand why these stipulated facts should be a significant cause for concern for Holden, who may not consider Eliezer’s endorsement of what is and isn’t legitimate criticism of SI particularly significant evidence of anything important.
Can you expand on your reasoning here?
Not to the extent that SI could itself be increasing existential risk, a point Holden also makes. “Even a chance” swings both ways.
I am completely lost by how this is a response to anything I said.
It’s not. Apparently I somehow replied to the wrong post… It’s actually aimed at sufferer’s comment you were replying to.
I don’t suppose there’s a convenient way to move it? I don’t think retracting and re-posting would clean it up sufficiently, in fact that seems messier.
Ah! That makes sense. I know of no way to move it… sorry.
I suspect that Holden would also consider Robin Hanson a competent critic. This is because Robin is smart, knowledgeable and prestigiously accredited.
But your comment has alerted me to the fact that even if Hanson comes out as a flat-earther tomorrow, the supporting posts are still weak.
The issue of the two most credible critics of SIAI disagreeing with each other is logically independent of the issue of Holden’s wobbly argument against the utilitarian argument for SIAI. Many thanks.
I’m not sure what you mean by “but if there’s even a chance …”.
As Holden and Eliezer both explicitly state, SIAI itself rejects the “but there’s still a chance” argument.
It all depends on how small that small chance is. Pascal’s mugging is typically done with probabilities that are exponentially small, e.g. 10^-10 or so.
But what if Holden declines to recommend SIAI for donations even when there’s a 1% or 0.1% chance of it making that big a difference?
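For concreteness, here is a minimal sketch in Python of the arithmetic behind that distinction; the stakes figure is a made-up placeholder, not a claim about the actual value of averting an existential catastrophe:

```python
# A minimal sketch of the expected-value arithmetic at issue.
# STAKES is a purely hypothetical placeholder value, not a claim
# about the actual value of averting an existential catastrophe.
STAKES = 10**15  # hypothetical units of value at stake

def expected_value(probability: float, stakes: float = STAKES) -> float:
    """Expected value of an intervention that succeeds with the given probability."""
    return probability * stakes

# A Pascal's-mugging-sized probability: exponentially small.
print(expected_value(1e-10))  # ~1e5

# The probabilities under discussion here: 1% and 0.1%.
print(expected_value(0.01))   # ~1e13
print(expected_value(0.001))  # ~1e12
```

The point is only that 10^-10 and 10^-2 are eight orders of magnitude apart at fixed stakes, so whatever force the anti-Pascal’s-mugging posts have against the former does not automatically carry over to a 1% chance.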