(I think at least part of what’s going on is that there is a separate common belief that being superintelligent (compared to the single best humans) is enough to bootstrap to overwhelming superintelligence, and some of the MIRI vs Redwood debates are about how necessarily true that is)
I don’t really understand what you’re saying. I think it’s very likely that [ETA: non-galaxy-brained] superintelligent AIs will be able to build galaxy-brained superintelligences within months to years if they are given (or can steal) the resources needed to produce them. I don’t think it’s obvious that they can do this with extremely limited resources.
I think (unconfidently guessing) that Eliezer is more bullish than you on “they can do this with pretty limited resources”, and this leads to him caring less about the distinction between “weakly superhuman” and “overwhelmingly superhuman”.
What would be keeping the resources extremely limited in this scenario? My understanding was that control was always careful to specify that it was targeting the “near human level” regime.
Yeah, I think control is unlikely to work for galaxy-brained superintelligences. It’s unclear how superintelligent they have to be before control is totally unworkable.
I think that’s consistent with what Buck just said. (I interpreted him to be using superintelligent AI here to mean “near human level”, and that those AIs would be able to develop successor galaxy-brained AI if they had enough resources, but, if you have sufficiently controlled them, they hopefully won’t.)