[Question] Why is there an alignment problem?

There is an unspoken presumption behind the alignment problem.

The unspoken presumption is this: building an AI with a sound, robust epistemology and leaving it to observe the world via digital information is not sufficient to produce a morally good AI.

In other words, knowing the Truth is not a sufficient condition for moral goodness.

What is the basis for this presumption?
