If the international governing body starts approving AI development, then aren’t we basically just back in the plan A regime?
I think MIRI’s plan is clearly meant to eventually build superintelligence, given that they’ve stated various times it’d be an existential catastrophe if this never happened – they just think it should happen after a lot of augmentation and carefulness.
A lot of my point here is I just don’t really see much difference between Plan A and Shutdown except for “once you’ve established some real control over AI racing, what outcome are you shooting for near-term?”, and I’m confused why Plan A advocates see it as substantially different.
(Or, I think the actual differences are more about “how you expect it to play out in practice, esp. if MIRI-style folk end up being a significant political force.” Which is maybe fair, but, it’s not about the core proposal IMO.)
“We wouldn’t want to pause 30 years, and then do a takeoff very quickly – it’s probably better to do a smoother takeoff.”
> huh, this one seems kinda relevant to me.
Do you understand why I don’t understand why you think that? Like, the MIRI plan is clearly aimed at eventually building superintelligence (I realize the literal treaty doesn’t emphasize that, but, it’s clear from very public writing in IABIED that it’s part of the goal), and I think it’s pretty agnostic over exactly how that shakes out.