And I probably should defer to their judgement on this, as they certainly know more than me about the SIAI’s work and what it could do with more money.
I was simply saying that in my estimation, expected utility would recommend that they splurge on Tr-Ro lottery tickets—but I’m still happy that they don’t.
(Just in case my estimation is relevant: I feel the SIAI has a decent chance of moving the world towards an AI that is non-deadly, useful, and doesn’t constrain humanity too much. With a lot more money, I think they could implement an AI that makes the world a fun heaven on earth. The expected utility is positive, but to my mind the accompanying increase in the risk of us all dying horribly makes it not worthwhile.)