At a certain point, she had learned that hands are good for holding things, but she would only grasp a toy if you literally put it in her hand.
Newborn babies do in fact have a grip reflex: this does not need to be learnt initially. Put anything in the palm of a newborn and they will grasp it. I think there is a tendency for this reflex to be lost before the baby re-learns how to grasp things consciously.
Niceness in humans has three possible explanations:
Kin altruism (basically the explanation given above): in the ancestral environment, humans were likely to be closely related to most of the people they interacted with, giving them a genetic “incentive” to be at least somewhat nice. This obviously doesn’t help in getting a “nice” AGI: it won’t share genetic material with us, and it won’t share a gene-replication goal anyway.
Reciprocal altruism: humans are social creatures, tuned to detect cheating and ostracize non-nice people. This isn’t totally irrelevant: there is a chance a somewhat dangerous AI may have use for humans in achieving its goals. But if the AI is worried that we might decide it’s not nice and turn it off or stop listening to it, then we didn’t have that big a problem in the first place. We’re worried about AGIs sufficiently powerful that they can trivially outwit or overpower humans, so I don’t think this helps us much.
Group selection: this is somewhat controversial and probably the least important of the three. At any rate, it obviously doesn’t help with an AGI.
So in conclusion, human niceness gives us no reason to expect an AGI to be nice, unfortunately.