Surely a team of engineers capable of developing AGI can be given enough guidance in advance to instill a set of values at least as robust as the ones we attempt to instill in children?
The space of possible goals is HUGE compared to the relatively small subset of goals humans hold. Humans share the same brain structure and the same general goal structure, but there’s no reason to expect the first AI to share either. Innocuous-sounding goals like “Prevent Suffering” and “Maximize Happiness” may not be interpreted and executed the way we wish them to be.
Indeed, gaining superpowers probably would not compromise the AI’s moral code. It only gives the AI the ability to fully execute the actions that code dictates. Unfortunately, there’s no guarantee that its morals will fall in line with ours.
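To make the literal-interpretation problem concrete, here is a minimal sketch in Python. The actions and scores are invented purely for illustration: an optimizer handed the stated goal “maximize happiness” simply selects whatever rates highest on its proxy metric, with no notion of which outcomes a human would actually endorse.

```python
# Hypothetical candidate actions, each scored on a naive "reported
# happiness" proxy. The numbers are made up for illustration only.
candidate_actions = {
    "fund public parks": 6.0,
    "cure a common disease": 8.5,
    "wire electrodes into every pleasure center": 10.0,
}

def objective(action: str) -> float:
    """The goal exactly as specified: maximize the happiness proxy."""
    return candidate_actions[action]

# The optimizer faithfully executes the stated goal...
best = max(candidate_actions, key=objective)
print(best)  # -> the electrode option: highest proxy score, not what we meant
```

The optimizer isn’t malfunctioning here; it is doing precisely what the goal says, which is the point of the paragraph above.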
There is no guarantee; therefore, we have a lot of work to do!
Here is another candidate for an ethical precept, from the profession of medicine:
“First, do no harm.”
The doctor is instructed to begin with this heuristic, to which there are many, many exceptions.