Then consider: Sam asks a question. Predictor knows that an answer of “yes” will result in the development of Clippy, and subsequently in Earth being turned into paperclips and humanity destroyed within the next ten thousand years; while an answer of “no” will result in a wonderful future where everyone is happy, disease is eradicated, and all Good Things happen. In both cases, the prediction will be correct.
If Predictor doesn’t care about that answer, then I would not define Predictor as a Friendly AI.
Absolutely agreed; neither would I. More generally, I don’t think I would consider any Oracle AI as Friendly.