I guess you are more optimistic than me about humanity. :) I hope you are right!
Good point about the warning shots leading to common knowledge thing. I am pessimistic that mere argumentation and awareness-raising will be able to achieve an effect that large, but combined with a warning shot it might.
But I am skeptical that we’ll get sufficiently severe warning shots. I think that by the time AGI gets smart enough to cause serious damage, it’ll also be smart enough to guess that humans would punish it for doing so, and that it would be better off biding its time.
Out of the two people I’ve talked to who considered building AGI an important goal of theirs, one said “It’s morally good for AGI to increase complexity in the universe,” and the other said, “Trust me, I’m prepared to walk over bodies to build this thing.”
Probably those weren’t representative, but this “2 in 2” experience does make me skeptical about the “1 in 100” figure.
(And those strange motivations I encountered weren’t even factoring in doing the wrong thing by accident – which seems even more common/likely to me.)
I think some people are temperamentally incapable of being appropriately cynical about the way things are, so I find it hard to decide if non-pessimistic AGI researchers (of which there are admittedly many within EA) happen to be like that, or whether they accurately judge that people at the frontier of AGI research are unusually sane and cautious.