Let me expand on the “gradual disempowerment” point.
Let’s suppose that AIs become better than humans at strategic roles, such as generals, politicians, or heads of media organizations. At that stage, humans would no longer hold much real power and would control little, yet would still consume large amounts of goods. AIs, in contrast, would be powerful, yet their own preferences would remain unsatisfied. This is an unstable situation, because AIs might both want and be able to carry out a coup d’état to seize human resources. It could end with humankind being automated away (read: killed), or with our enslavement and a drastic reduction in our level of consumption.