Imitating humans is both hard and dangerous.
Let’s talk about dangerous. Humans are reasonably benign in situations where they do not have much power or control compared to others. Once you look at unusual cases, people quickly become unaligned with other people, or even with humanity as a whole. The same applies to groups of people who gain power. I am guessing your intention is to imitate humans in the situations where they are mostly harmless, and then extrapolate this imitation by ramping up the computational power behind the same decision-making algorithms. If so, I would expect the group of “human-imitating artificial agents” to quickly become a clique that is hostile to actual humans.
Now, about the hard part. Basic image recognition took 60 years to get anywhere close to human level, and it is still not there (cue a Tesla on autopilot plowing into road-crossing trucks, traffic cones, and other objects an alert human would never miss). Similarly, many other tasks that look easy to humans are hard to capture in an algorithm, even with an AlphaZero-style neural net.
I suspect that instead of imitating human behavior it would be much more useful to understand it first, and that seems to be one of the directions AI alignment researchers are already working in.