I guess you are more optimistic than me about humanity. :) I hope you are right!
Out of the two people I’ve talked to who considered building AGI an important goal of theirs, one said “It’s morally good for AGI to increase complexity in the universe,” and the other said, “Trust me, I’m prepared to walk over bodies to build this thing.”
Probably those weren’t representative, but this “2 in 2” experience does make me skeptical about the “1 in 100” figure.
(And those strange motivations I encountered weren’t even factoring in doing the wrong thing by accident – which seems even more common/likely to me.)
I think some people are temperamentally incapable of being appropriately cynical about the way things are, so I find it hard to decide if non-pessimistic AGI researchers (of which there are admittedly many within EA) happen to be like that, or whether they accurately judge that people at the frontier of AGI research are unusually sane and cautious.