So far as I can tell, the most plausible way for the universe to be deterministic is something along the lines of “many worlds”, where Reality is a vast superposition of what-look-to-us-like-realities. If the future of AI is determined, what that means is more like “15% of the future has AI destroying all human value, 10% has AI ushering in a utopia for humans, 20% has it producing a mundane dystopia where all the power and wealth ends up in a few not-very-benevolent hands, 20% has it improving the world in mundane ways, and 35% has it fizzling out and never making much more change than it already has” than like “it’s already determined that AI will/won’t kill us all”.
(For the avoidance of doubt, those percentages are not serious attempts at estimating the probabilities. Maybe some of them are more like 0.01% or 99.99%.)