I am personally convinced (I'm a one-time donor myself), but the optimal-charity argument in favour of Friendly AI research and development (which will be fully developed in this paper) is something I can use with my friends. They are pretty much the practical type and will definitely respond to wanting more bang for their buck, and to knowing where their marginal rupee of charity should go.
There are inferential gaps, and when I, a known sci-fi fan, present the argument, I get all sorts of looks. If I had a peer-reviewed paper to show them, that would work nicely in my favour.
I think that is precisely the effect this paper is aimed at. Let's see how persuasive it is once it comes out.
Sounds like a good idea to me.