The original topic of this thread is “Why no in-between? Why should we think that there is no ‘in between’ period where AI is powerful enough that it might be able to kill us and weak enough that we might win the fight?”
This is not a question about whether we can decide not to build ASI; it’s a question about what would happen if we did.
Certainly there are lots of important questions here, and “can we coordinate to just not build the thing?” is one of them, but it’s not what this thread was about.
It just seems to me like the topics are interconnected:
EY argues that there is likely no in-between. He does so specifically to argue that a “wait and see” strategy is not feasible: past a certain point we cannot experiment and hope to glean further evidence; we must act on pure theory, because that is the best possible knowledge we can hope for before things become deadly;
dvd is not convinced by this reasoning. Arguably, they’re right: while EY’s argument has weight, I would consider it far from certain, and it mostly seems built around the assumption of ASI-as-singleton rather than, say, an ecosystem of evolving AIs in competition, which would also have to worry about each other and about a closing window of opportunity;
if warning shots are possible, a lot of EY’s arguments don’t hold as straightforwardly. It becomes less reasonable to take extreme actions on pure speculation, because we can afford (though at some risk) to wait for a first sign of experimental evidence that the risk is real before going all in and risking paying the costs for nothing.
This is not irrelevant or unrelated IMO. I still think the risk is large but obviously warning shots would change the scenario and the way we approach and evaluate the risks of superintelligence.
You are importantly sliding from one point to another, and this is not a topic where you can afford to do that. You can’t just tally up the markers that sort of vibe towards “how dangerous is it?” and get an answer about what to do. The arguments are individually true, or false, and what sort of world we live in depends on which specific combination of arguments is true, or false.
If it turns out there is no political will for a shutdown or controlled takeoff, then we can’t have a shutdown or controlled takeoff. (But that doesn’t change whether AI is likely to FOOM, or whether alignment is easy or hard.)
If AI FOOMs suddenly, a lot of AI alignment techniques will probably break at once. If things are gradual, smaller things may break one or two at a time, and maybe we get warning shots, and this buys us time. But there’s still the question of what to do with that time.
If alignment is easy, then a reasonable plan is “get everyone to slow down for a couple years so we can do the obvious safety things, just less rushed.” If alignment is hard, that won’t work, you actually need a radically different paradigm of AI development to have any chance of not killing everyone – you may need a lot of time to figure out something new.
if warning shots are possible, a lot of EY’s arguments don’t hold as straightforwardly
None of IABIED’s arguments had to do with “are warning shots possible?”, but even if they did, it is a logical fallacy to say “warning shots are possible, so EY’s arguments are less valid; therefore, this other argument that had nothing to do with warning shots is also invalid.” If you’re reasoning that sloppily, then when you get to the warning-shot world, and you don’t understand that overwhelmingly powerful superintelligence is qualitatively different from non-overwhelmingly-powerful superintelligence, you might think “angle for a 1–2 year slowdown” instead of trying for a longer global moratorium.
(But, to repeat, the book doesn’t say anything about whether warning shots are possible.)