AI Notkilleveryoneism has no consensus on most things, but there is still semi-agreement (not just within MIRI) that the alignment problem is the difficulty of aligning the AI’s goals with human goals, rather than the difficulty of finding the universe’s objective morality to program the AI to follow.
Yes, there is semi-agreement on that (it’s the majority viewpoint in AI Notkilleveryoneism circles), but the minority viewpoint that a viable approach to AI existential safety has to be non-anthropocentric (and thus not directly tied to human goals, while still taking human interests into account) is not uncommon.