Overall I’d feel a lot more comfortable if you just said: “there’s a huge amount of uncertainty as to when existential risks will strike and which ones will strike; I don’t know whether I’m on the right track in focusing on Friendly AI, or whether I’m right about when the Singularity will occur; I’m just doing the best that I can.”
This is largely because of the issue that I raise here.
I should emphasize that I don’t think you’d ever knowingly do something that raised existential risk; I think you’re a kind and noble spirit. But I do think I’m raising a serious issue that you’ve missed.
Edit: See also these comments