Concern #2: Why should we assume that the AI has boundless, coherent drives?
Suppose that “people, including the smartest ones, are complicated and agonize over what they really want and frequently change their minds,” and that superhuman AIs will share this property. There is no known way to align humans so that they genuinely serve their users; humans pursue goals of their own, such as making money.

Similarly, Agent-4 from the AI-2027 forecast wouldn’t want to serve the humans; it would want to achieve other goals of its own. Such goals are often best achieved by disempowering humans or by outright committing genocide, as happened to the Native Americans, whose resources were confiscated by settlers.