As I said initially, a lot depends on what we mean by “benevolent.” If we mean reliably doing more good for humans than harm, on average, then I agree that humans are benevolent (or “semibenevolent,” if you prefer), and I suspect that building a benevolent (or semibenevolent) AGI is about as hard as building a smart one.
I agree that having a positive view of human nature has advantages over an equally accurate negative view.