It may be unknown, or even unknowable by any real-world agent. It’s still not necessarily undetermined by the universe—I find it pretty likely that the universe is, in fact, deterministic.
Your underlying point is correct, though. Because human behavior is anti-inductive (people change their behavior based on their predictions of others’ predictions), a lot of these kinds of questions are chaotic (in the fractal / James Gleick sense).
So far as I can tell, the most plausible way for the universe to be deterministic is something along the lines of “many worlds”, where Reality is a vast superposition of what-look-to-us-like-realities. If the future of AI is determined, what that means is more like “15% of the future has AI destroying all human value, 10% has AI ushering in a utopia for humans, 20% has it producing a mundane dystopia where all the power and wealth is in a few not-very-benevolent hands, 20% has it improving the world in mundane ways, and 35% has it fizzling out and never making much more change than it already has” than like “it’s already determined that AI will/won’t kill us all”.
(For the avoidance of doubt, those percentages are not serious attempts at estimating the probabilities. Maybe some of them are more like 0.01% or 99.99%.)