I don’t quite understand the plan. What if I get access to cheap friendly AI, but there’s also another, much more powerful AI that wants my resources and doesn’t care much about me? What would stop that much more powerful AI from outplaying me for those resources, perhaps by entirely legal means? Or is the idea that the publicly available AIs will somehow always be the most powerful ones? That isn’t true even now.
This might be obvious, but I don’t think we have evidence to support the idea that there really is anything like a concrete plan. All of the statements I’ve seen from Sam on this issue so far are incredibly basic and hand-wavy.
I suspect that any concrete plan would be fairly controversial, so it’s easiest to speak in generalities. And I doubt there’s anything like an internal team with some great secret macrostrategy—instead I assume that they haven’t felt pressured to think through it much.
The only sane version of this I can imagine is one where there’s either a single aligned ASI, or a coalition of aligned ASIs, and everyone has equal access. Because the AI(s) are aligned, they won’t design bioweapons for misanthropes and the like, and hopefully they also won’t make all human effort meaningless by simply doing everything for us and seizing the lightcone, etc.
Seems bad to posit that there must be a sane version.