I don’t know if you were committing this particular error internally, but, at the least, the sentence is liable to cause the error externally, so: Large consequences != prior improbability.
What I meant when I described the claim (hereafter “C”) that SI is better suited to convert dollars to existential risk mitigation than any other charitable organization as “extraordinary” was that priors for C are low (C is false for most organizations, and therefore likely to be false for SI absent additional evidence about SI), not that C has large consequences (although that is true as well).
Yes, this might be a failing of using the wrong reference class (charitable organizations in general) to establish one’s priors, as you suggest. The fact remains that when trying to solicit broad public support, or support from an organization like GiveWell, it’s likely that SI will be evaluated within the reference class of other charities. If using that reference class leads to improperly low priors for C, it seems SI has a few strategic choices:
1) Convince GiveWell, and donors in general, that SI is importantly unlike other charities, and should not be evaluated as though it were like them—in other words, win at reference class tennis.
2) Ignore donors in general and concentrate its attention primarily on potential donors who already use the correct reference class.
3) Provide enough evidence to convince even someone who starts out with improperly low priors drawn from the incorrect reference class of “SI is a charity” to update to a sufficiently high estimate of C that donating money to SI seems reasonable (in practice, I think this is what has happened and is happening with anthropogenic climate change).
4) Look for alternate sources of funding besides charitable donations.
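To make the stakes of strategy #3 concrete, here is a toy Bayesian update in odds form. The numbers are my own illustrative assumptions, not anything from the thread: suppose the reference class “charities in general” gives C a prior of 1 in 1000. The sketch shows how strong the evidence (likelihood ratio) has to be before the posterior for C becomes non-negligible.

```python
def posterior(prior, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical reference-class prior for C: 1 in 1000.
prior = 0.001

for lr in (10, 100, 1000):
    print(f"likelihood ratio {lr:>4}: posterior ~ {posterior(prior, lr):.3f}")
```

Even evidence a hundred times more likely under C than under not-C only moves a 0.1% prior to roughly 9%; it takes a likelihood ratio near 1000 to reach even odds. That is the quantitative sense in which a low reference-class prior demands a lot of additional evidence about SI specifically.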
One way to approach strategy #1 is the one you use here—shift the conversation from whether or not SI can actually spend money effectively to mitigate existential risk to whether or not uFAI/FAI by 2025 (or some other near-mode threshold) is plausible.
That’s not a bad tactic; it works pretty well in general.
Your statement was that it was an extraordinary claim that SIAI provided x-risk reduction—why then would SIAI be compared to most other charities, which don’t provide x-risk reduction, and don’t claim to provide x-risk reduction? The AI-risk item was there for comparison of standards, as was global warming; i.e., if you claim that you doubt X because of Y, but Y implies doubting Z, but you don’t doubt Z, you should question whether you’re really doubting X because of Y.
“why then would SIAI be compared to most other charities, which don’t provide x-risk reduction, and don’t claim to provide x-risk reduction?”
Are you trying to argue that it isn’t in fact being compared to other charities? (Specifically, by GiveWell?) Or merely that if it is, those doing such comparison are mistaken?
If you’re arguing the former… huh. I will admit, in that case, that almost everything I’ve said in this thread is irrelevant to your point, and I’ve completely failed to follow your argument. If that’s the case, let me know and I’ll back up and re-read your argument in that context.
If you’re arguing the latter, well, I’m happy to grant that, but I’m not sure how relevant it is to Luke’s goal (which I take to be encouraging Holden to endorse SI as a charitable donation).
If SI wants to argue that GiveWell’s expertise with evaluating other charities isn’t relevant to evaluating SI because SI ought not be compared to other charities in the first place, that’s a coherent argument (though it raises the question of why GiveWell ever got involved in evaluating SI to begin with… wasn’t that at SI’s request? Maybe not. Or maybe it was, but SI now realizes that was a mistake. I don’t know.)
But as far as I can tell that’s not the argument SI is making in Luke’s reply to Holden. (Perhaps it ought to be? I don’t know.)
I worry that this conversation is starting to turn around points of phrasing, but… I think it’s worth separating the ideas that you ought to be doing x-risk reduction and that SIAI is the most efficient way to do it, which is why I myself agreed strongly with your own, original phrasing, that the key claim is providing the most efficient x-risk reduction. If someone’s comparing SIAI to Rare Diseases in Cute Puppies or anything else that isn’t about x-risk, I’ll leave that debate to someone else—I don’t think I have much comparative advantage in talking about it.
Further, it seems to me that Holden is implicitly comparing SI to other charitable-giving opportunities when he provides GW’s evaluation of SI, rather than comparing SI to other x-risk-reduction opportunities. I tentatively infer, from the fact that you consider responding to such a comparison something you should leave to others but you’re participating in a discussion of how SI ought to respond to Holden, that you don’t agree that Holden is engaging in such a comparison.
If you’re right, then I don’t know what Holden is doing, and I probably don’t have a clue how Luke ought to reply to Holden.
Holden is comparing SI to other giving opportunities, not just to giving opportunities that may reduce x-risk. That’s not a part of the discussion Eliezer feels he should contribute to, though. I tried to address it in the first two sections of my post above, and then in part 3 I talked about why both FHI and SI contribute unique and important value to the x-risk reduction front.
In other words: I tried to explain that for many people, x-risk is Super Duper Important, and so for those people, what matters is which charities among those reducing x-risk they should support. And then I went on to talk about SI’s value for x-risk reduction in particular.
Much of the debate over x-risk as a giving opportunity in general has to do with Holden’s earlier posts about expected value estimates, and SI’s post on that subject (written by Steven Kaas) is still under development.
I agree with you on all of those points.