[Question] Term/Category for AI with Neutral Impact?

Is there a commonly used term on LessWrong for an AGI whose net impact on human value is neither significantly positive nor significantly negative? (For instance, an AGI that stops all future attempts to build AGI, but otherwise tries to preserve the course of human history as if it had never existed.)

Would such an AI be considered "aligned"? Most discussions of aligned AI seem to focus on making the AI share and actively promote human values, but the neutral-impact case I described seems importantly different.
