Is there a decent chance an AI takeover is relatively nice?
> This is an existential catastrophe IMO and should be desperately avoided, even if they do leave us a solar system or whatever.
Actually, I think this maybe wasn’t cruxy for anyone. I think @ryan_greenblatt said he agreed it didn’t change the strategic picture, it just changed some background expectations.
(I maybe don’t believe him that he doesn’t think it affects the strategic picture? It seemed like his view was fairly sensitive to various things being like 30% likely instead of like 5% or <1%, and it feels like it’s part of an overall optimistic package that adds up to being more willing to roll the dice on current proposals? But, I’d probably believe him if he reads this paragraph and is like “I have thought about whether this is a (maybe subconscious) motivation/crux and am confident it isn’t.”)
Not a crux for me ~at all. That said, some upstream views, ones that make me think “AI takeover but humans stay alive” is more likely and that also make me think avoiding AI takeover is relatively easier, might be a crux.
Insofar as you’re just assessing which strategy reduces AI takeover risk the most, there’s really no way that “how bad is takeover” could be relevant. (Other than, perhaps, having implications for how much political will is going to be available.)
“How bad is takeover?” should only be relevant when trading off “reduced risk of AI takeover” against some other consideration (such as the risk of earth-originating intelligence going extinct, or the probability of US-dominated vs. CCP-dominated vs. internationally cooperative futures). So if this was going to be a crux, I would bundle it together with your Chinese superintelligence bullet point, and ask about the relative goodness of various aligned-superintelligence outcomes vs. AI takeover. (Though it seems fine to just drop it, since Ryan and Thomas don’t think it’s a big crux, a position I’m also sympathetic to.)