What about literally the AI 2027 story, which does involve superintelligence and which Scott thinks doesn't sound "unnecessarily dramatic"? AI 2027 seems much more intuitively plausible to me, and it seems less "sci-fi" in this sense. (I'm not saying that "less sci-fi" is much evidence it's more likely to be true.)
I think if the AI 2027 story had more details, they would look fairly similar to the ones in the Sable story. (The Sable story substitutes in more superpersuasion, vs military takeover via bioweapons. I think if you spelled out the details of that, it'd sound approximately as outlandish: less reliant on new tech, but triggering more people to say "really? people would buy that?" The stories otherwise seem pretty similar to me.)
I also think the AI 2027 story is sort of "the earlier failure" version of the Sable story. AI 2027 is (I think?) basically a story where we hand over a lot of power of our own accord, without the AI needing to persuade us of anything, because we think we're in a race with China and we just want a lot of economic benefit.
The IABI story is specifically trying to highlight "okay, but would it still be able to do that if we didn't just hand it power?", and it does need to take more steps to win in that case. (Instead of inventing bioweapons to kill people, it's probably inventing biomedical stuff and other cool new tech that is helpful because it's straightforwardly valuable; that's the whole reason we gave it power in the first place. If you spelled out those details, it'd also seem more sci-fi-y.)
It might be that the AI 2027 story is more likely because it happens first / more easily. But to argue the thesis of the book, it's necessary to tell a story with more obstacles, to highlight how the AI would overcome them. I agree that does make it more dramatic.
Both stories end with "and then it fully upgrades its cognition and invents Dyson spheres and goes off conquering the universe", which is pretty sci-fi-y.