Obvious guess: Eli^H^H^H Michael Vassar doesn’t think SIAI’s budget shows increasing marginal returns. (Nor, for what it’s worth, can I imagine why it would.)
That one’s easy: successfully saving the world requires more money than they have now, and if they don’t reach that goal, it makes little difference how much money they raise. Eliezer believes most non-winning outcomes are pretty much equivalent:
Mostly, the meddling dabblers won’t trap you in With Folded Hands or The Metamorphosis of Prime Intellect. Mostly, they’re just gonna kill ya.
(from here)
And I probably should defer to their judgement on this, as they certainly know more than I do about the SIAI’s work and what it could do with more money.
I was simply saying that in my estimation, expected utility would recommend that they splurge on Tr-Ro lottery tickets—but I’m still happy that they don’t.
(Just in case my estimation is relevant: I feel the SIAI has a decent chance of moving the world towards an AI that is non-deadly, useful, and doesn’t constrain humanity too much. With a lot more money, I think they could implement an AI that makes the world a fun heaven on earth. The expected utility is positive, but the increased risk of us all dying horribly doesn’t make it worthwhile to me.)
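For what it’s worth, the lottery-ticket logic above can be made concrete with a toy calculation. If utility is a step function of money (only crossing a “enough to win” threshold matters), then a gamble with hugely negative expected *money* can still have positive expected *utility*. This is just an illustrative sketch; the threshold, wealth, and lottery numbers are all invented:

```python
# Toy model: threshold ("win or lose") utility makes high-variance gambles
# attractive even when their expected money value is negative.
# All numbers are hypothetical.

THRESHOLD = 10_000_000  # money assumed necessary to "win"; made up

def utility(wealth):
    # Step utility: nothing matters except crossing the threshold.
    return 1.0 if wealth >= THRESHOLD else 0.0

def expected_utility(current, outcomes):
    # outcomes: list of (probability, payoff) pairs summing to probability 1
    return sum(p * utility(current + payoff) for p, payoff in outcomes)

current = 1_000_000  # well short of the threshold; made up

# Option A: keep the money. Certain outcome, but never reaches the threshold.
keep = [(1.0, 0.0)]

# Option B: a lottery ticket with strongly negative expected money value,
# but a tiny chance of a jackpot that does cross the threshold.
lottery = [(1e-8, 20_000_000), (1.0 - 1e-8, -1.0)]

print(expected_utility(current, keep))     # 0.0
print(expected_utility(current, lottery))  # 1e-08
```

Under these assumptions the lottery strictly dominates in expected utility, which is the sense in which “expected utility would recommend that they splurge on lottery tickets” if non-winning outcomes really are all equivalent.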