Humans are semi-benevolent. I believe that if people in general didn’t do more good (behavior which leads towards human survival) than harm, the human race could not have existed as long as it has.
By observation, it’s not a matter of a small minority of people who make things a lot better vs. a majority whose effect is neutral or negative. I’m reasonably sure most people do more good than harm. The good that people do for themselves is included in the calculation.
This doesn’t mean people come anywhere near a theoretical maximum of benevolence. It just means that common behavior which doesn’t cause problems tends to go unnoticed.
I don’t know whether realizing this offers any leverage for producing more benevolence, though I’m inclined to think that “build on what you’re doing well” is at least as good an approach as “look at how awful you are”. (For the latter, consider the number of people who believe that if aliens met us, they’d destroy us out of disgust at how we treat each other.)
As I said initially, a lot depends on what we mean by “benevolent.” If we mean reliably doing more good for humans than harm, on average, then I agree that humans are benevolent (or “semi-benevolent,” if you prefer) and suspect that building a benevolent (or semi-benevolent) AGI is about as hard as building a smart one.
I agree that having a positive view of human nature has advantages over an equally accurate negative view.