It’s not clear that funding SIAI and FHI has positive expected value.
If we disagree on this, then we are not even on the same page; never mind the other counter-points you bring up.
I can’t imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value.
If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an AGI research community that is not especially safety-oriented, and a broader AI research community that is not taking the risk seriously.
Not to mention the work that FHI does on a host of issues other than AI.
SIAI has a higher risk of producing uFAI than your average charity.
If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an AGI research community that is not especially safety-oriented, and a broader AI research community that is not taking the risk seriously.
They could be dangerously deluded, for example, even if their aim is right. Currently, I don’t believe they are, but I gave an example of how you could possibly come to a conclusion that SIAI has negative expected value.
Maybe FAI is impossible, humanity’s only hope is to avoid the emergence of any superhuman AI, fooming is difficult and slow enough for that to be a somewhat realistic prospect, and an almost-Friendly AI is a lot more dangerous because it is less likely to be destroyed in time?
Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult to achieve than for humanity to grow up on its own.)
Could you rephrase that? I have no idea what you are saying here.
FAI is a device for producing a good outcome. Humanity itself is such a device, to some extent. FAI as AI is an attempt to make that process more efficient: to understand the nature of good and design a process for producing more of it. If it’s in practice impossible to develop such a device significantly more efficient than humanity, then we just let the future play out, guarding it against known failure modes such as AGI with arbitrary goals.
Thank you, now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.
Maybe God will strike us down just for thinking about building a Friendly AI.
When you argue that the expected utility of action X is negative, you won’t make much headway by proposing an unlikely and gerrymandered set of circumstances such that, conditional on them being true, the conditional expectation is negative.
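To spell out the arithmetic behind this point (a sketch in standard expected-utility notation; the conditioning event C is my own placeholder, not something from the thread): by the law of total expectation,

\[
\mathbb{E}[U(X)] = P(C)\,\mathbb{E}[U(X)\mid C] + \bigl(1 - P(C)\bigr)\,\mathbb{E}[U(X)\mid \neg C].
\]

If C is an unlikely, gerrymandered scenario, then P(C) is small and even a very negative conditional term contributes little to the total; to argue that the overall expectation is negative, the scenario also has to be reasonably probable.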
Now what kind of civilized rational conversation is that?
If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an AGI research community that is not especially safety-oriented, and a broader AI research community that is not taking the risk seriously.
They could be dangerously deluded, for example, even if their aim is right. Currently, I don’t believe they are, but I gave an example of how you could possibly come to a conclusion that SIAI has negative expected value.
What SIAI/FHI are trying to do has very high expected value. In general, though, unaccountable charities often exhibit gross inefficiency at accomplishing their stated goals, so donating to organizations with low levels of accountability may hurt the very causes those charities work toward: the charities balloon, and it becomes harder for more promising organizations working on the same causes to emerge.
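To make the crowding-out worry concrete, here is a toy calculation; the numbers and the simple two-option model are invented purely for illustration and are not estimates about SIAI, FHI, or any real organization.

```python
# Toy model of the crowding-out worry. All numbers are invented for
# illustration; they are not estimates about any real organization.

donation = 1_000_000  # dollars available from donors interested in the cause

# Option A: fund the incumbent, low-accountability organization now.
incumbent_efficiency = 0.2   # fraction of each dollar turned into real progress
value_fund_now = donation * incumbent_efficiency

# Option B: hold out for a more accountable organization, accepting the
# risk that it never appears.
p_better_org = 0.5           # chance such an organization emerges at all
better_efficiency = 0.6      # fraction of each dollar it would turn into progress
value_hold_out = p_better_org * donation * better_efficiency

print(value_fund_now)   # 200000.0
print(value_hold_out)   # 300000.0
```

Whether holding out actually wins depends entirely on the guessed efficiencies and on the probability that a better organization emerges, which is exactly where the disagreement in this thread lies.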
What makes you say that SIAI and FHI are less-than-averagely accountable?
I don’t think that SIAI and FHI are less-than-averagely accountable. I think that the standard for accountability in the philanthropic world is in general very low, and that there’s an opportunity for rationalists to raise it by insisting that the organizations they donate to demonstrate high levels of accountability.
You want to shut down SIAI/FHI in the hope that some other organization will spring up that otherwise wouldn’t have, and cite lack of accountability as the justification, whilst admitting that most charities are very unaccountable? Why should a new organization be more accountable? Where is your evidence that SIAI/FHI are preventing such other organizations from coming into existence?
I’m saying that things can change. Information is much more available now than it was in the past, so interested donors have better means of holding charities accountable than they once did. The reason the standard for accountability in the philanthropic world is so low is that donors do not demand high accountability. If we start demanding high accountability, then charities will become more accountable.
Last year GiveWell leveraged 1 million dollars toward charities demonstrating unusually high accountability. Since GiveWell is a young organization (founded in 2007), I expect the amount leveraged to grow rapidly over the next few years.
(Disclaimer: The point of my above remark is not to promote GiveWell in particular; GiveWell itself may need improvement. I’m just pointing to GiveWell as an example showing that incentivizing charities based on accountability is possible.)
Since SIAI/FHI are fairly new, it’s reasonable to suppose that they just happened to be the first organizations on the ground, and that over time there will be more and more people interested in funding, creating, or working at organizations with goals similar to SIAI and FHI. I believe that it’s best for most donors interested in the causes that SIAI and FHI work toward to place money in donor-advised funds, commit to giving it to an organization devoted to existential risk that demonstrates high accountability, and hold out for such an organization.
(Disclaimer: This post is not anti-SIAI/FHI. Quite possibly SIAI and FHI are capable of demonstrating high levels of accountability, and if/when they do so they will be worthy of funding; the point is just that they are not presently doing so.)
I must say that this is a remarkably good suggestion.
However, going back to the original point of the debate, the discussion was about whether money in the hands of Peter Thiel was better than money in the hands of poor Africans.
The counterfactual was not
(money in a donor-advised fund to reduce existential risks) versus (money in SIAI’s account).
The counterfactual was
(money in SIAI’s account) versus (money spent on alcohol, prostitutes, festivals and other entertainment in the third world).
There’s probably a name for this fallacy but I can’t find it.
How is this a reply to the grandparent?
multifoliaterose is claiming that SIAI/FHI have zero or negative expected value. I claim that his justification for this claim is very flimsy.
He is claiming uncertainty about that, but in this particular thread he is discussing accountability specifically, and you are attacking the overall conclusion instead of engaging with that particular argument. To fight rationalization, you must resist the temptation to lump different considerations together, and consider each on its own merits, no matter what it argues for.
You must support a good argument, even if it’s used as an argument for destroying the world and torturing everyone for eternity, and you must oppose a bad argument for saving the future. That’s the price you pay for epistemic rationality.