The only additional ingredient is that AI can win against us in the market, thereby incrementally taking over Earth from humans. Initially it would be winning against poor people, and this would seem fine to rich people, but there's no reason to expect it would stop at any point—someone can always make their company win more by making it more AI-driven and less dependent on humans. Eventually, if AI can do every job, including fighting wars and trading, it wins economically against us and takes over the land we use to feed ourselves, and even if we try to fight it, we just lose a war against it. Then we're all dead. We don't yet have the kind of AI that can replace humans like that, but there's no reason to believe the next versions of AI can't be it, and once it can act in the world, there's no reason to believe there's any job or role that can't be served more reliably by an AI-powered robot. The owner class will initially be pleased by this, but they'd only be wealthy on paper: the robots will end up wealthier than they are, able to buy up all the farmland to build servers.
I wish it were easier to indicate an exploratory engineering mode/frame, with less detailed engineering connotations. What you describe is a good argument showing that at least this "can happen" in some unusual sense, while abstracting away the details of what else can happen, and of what anyone would be motivated to let happen. This is an important form of argument, and it can be seen as much weaker than it is if it's perceived as a forecast of what will actually happen.
Hmm, I believe myself to be forecasting what will happen in the majority of timelines. Can you clarify your point with one or several qualitatively different rephrasings, given that context? Or, how would your version of my point look?
Sure, the point I'm making applies just as well to what is intended as actual forecasting, if it has this exploratory engineering form. The argument becomes stronger if it's reframed from forecasting to a discussion of technological affordances, provided it in fact is a discussion of technological affordances, in a broad sense of "technological" that can include social dynamics.
An exploratory engineering sketch rests on assumptions about what can be done, and about what the world chooses to do, to draw conclusions about what happens in that case; it's a study of affordances, of options. Its validity is not affected if in fact more can be done, or if in fact something else will be done. But the validity of a forecast is affected by those things, so forecasting is more difficult, and reframing an exploratory engineering sketch as forecasting unnecessarily damages its validity.
In this particular case, I don't expect this story to play out without superintelligence getting developed early on, at which point the rest of the story stops being something the superintelligence would find worthwhile to let continue. And conversely, I don't expect such a story to start developing more than a few months before a superintelligence is likely to be built. But the story itself is good exploratory engineering: it shows that at least this level of economically unstoppable disempowerment is feasible.