This is the super mild version of the future, basically what Zvi has termed “AI fizzle”: AI advancing not much beyond where it currently is, staying a well-controlled tool in its operators’ hands.
Yes, even that can indeed lead to strange and uncomfortable futures for humanity.
But to assume this future is likely requires answering some questions:
Why does the AI not get smarter and vastly faster than humans, becoming to us as we are to sloths, and getting beyond our capacity to control?
Why does a world with such AIs in it not fall into devastating nuclear war as non-leading nuclear powers find themselves on the brink of being technologically and economically crushed?
Why does no one let a survival-seeking, self-improvement-capable AI loose on the internet? Did ChaosGPT not show that some fool will likely do this as a joke?
If the AI is fully AGI, why wouldn’t someone send robotic probes out of easy reach of human civilization, perhaps into space or under the crust of the earth (perhaps in an attempt to harvest resources)? If such a thing began, how would we stop it?
What if an enclave of AI decided to declare independence from humanity and conquer territory here on Earth, using threats of releasing bioweapons as a way to hold off nuclear attack?
I dunno. Economic defeat seems like a narrow sliver of the possibility space to even worry about.
If you don’t have answers, try asking o1 and see what it says.
Indeed, I don’t have answers, but only because this is a sort of “AI mid” future, one that assumes some remnant of the status quo remains intact, whether because AI does not advance as far or as fast as anticipated (a position I no longer hold), or because a future AI model deliberately chooses to maintain an artificial status quo bubble, a “human reserve” relatively indistinguishable from a more gradually progressing future, for psychosocial reasons (which is plausible, but not certain).
Generally, though, it’s the epistemological barrier at play, since the intent was to provide a more grounded, economist-focused look at the effects of universal task automation. I have mulled over technism a great deal recently, but since I have no economics background, my intention wasn’t to ask o1 but to use this as a launchpad, something an even more impressive AI system could handle. Deep Research is probably that model, so I intend to return to this with a follow-up to see whether any of this is coherent, or whether it really is as schizophrenic and sophistic as “the means of production owns the means of production” would have sounded historically.