That intuition seems to me to follow from the probably false assumption that if behavior X would, under some circumstances, be utility-maximizing, it is also likely to be utility-maximizing to fund a non-profit to engage in behavior X. SIAI isn’t a “do what seems to us to maximize expected utility” organization, because such vague goals don’t make for a good organizational culture. Organizing and funding research into FAI and into research inputs for FAI, plus doing normal non-profit fundraising and outreach: that’s a feasible non-profit directive.
By the same reasoning, it also follows from the assumption that the claims in every comment submitted on August 20, 2011 are true, yet I do not believe that.
I had, to the best of my ability, considered the specific situation when giving my advice.
Any advice can be dismissed by suggesting it rests on an overly general assumption.
If you thought someone was about to foom an unfriendly AI, you would do something about it, and you wouldn’t wait to properly update your 501(c) forms first.