Props for doing this! Mine:
I do feel like “disempower humanity” is a slightly odd framing. I’m operationalizing “humanity remains in power” as something along the lines of: most human governments continue collecting taxes and spending them on things like roads and hospitals, and at least half of global energy usage goes toward achieving ends that specific humans want to achieve. I’m operationalizing “AI disempowers humans” as “humanity remains in power” becoming false specifically because of AI.
But there’s another interpretation that goes something like “the ability of each human to change their environment is primarily deployed to intentionally improve the long-term prospects for human flourishing”, and I don’t think humanity has ever been empowered by that definition (and I don’t expect it to start being empowered like that).
Similar ambiguity around “world-endangering AI”—I’m operationalizing that as “any ML system (not necessarily a STEM+AI system specifically) that could be part of a chain of actions or events leading to a global catastrophe”.
I’m interpreting “as well as if humanity had survived” as “survived the dangers of AI specifically”.