I agree that a stable equilibrium with multiple AGIs preventing each other’s FOOM ambitions is possible. I want to see more work on this, so I’m very glad you’re thinking about it. I’d be happy to help.
I don’t think it’s easy to design such an equilibrium so that it stays stable for long. You mention the “need to build robots fast”. I think we’ll very soon have humanoid robots adequate to bootstrap robotics back up in, say, a nuclear-winter takeover scenario, so survival without humans won’t be a limiting factor for long anyway. Hoping it’s more than ten years before robots are undoubtedly capable of bootstrapping to a human-free world seems unrealistic.
So having human-aligned AGIs isn’t optional. It seems like an AGI equilibrium won’t include humans for long if those AGIs don’t care about human welfare, or at least about following instructions.
We do have to account for every imaginable scenario that’s possible, or our short-term solution will lead to a dead end in which we are dead and that’s the end.
By which I mean: it may be that a multipolar scenario leads almost certainly to doom as soon as robotics and other technologies have matured enough to broaden the possibilities for FOOM. A galactic civilization full of AGIs capable of FOOM seems highly unstable to me. It seems like some jackass is going to weaponize and go full self-replicator, and there won’t be a way to monitor all of space adequately enough to prevent this. I would love to have my mind changed!
If that’s true, we mustn’t start down the path of AGI proliferation. We need to aim for a singleton or small fixed coalition that prevents AGI proliferation. And because proliferation is increasingly hard to stop as we make progress toward AGI, we should figure that out soon.
Which is why I’m happy to see you working to propose specific routes by which multipolar scenarios can work. Most people who argue for those types of stable equilibria are optimists who just say something vague and go back to arguing for AI progress, leaving the important question almost unaddressed.
Hi Seth:
<<a dead end in which we are dead and that’s the end.
😂
<<Which is why I’m happy to see you working to propose specific routes by which multipolar scenarios can work.
Thank you! I actually just launched a website for the project last night, so you’ll likely be the first to see it. I went to bed feeling so burned out on the whole thing. Like many of us, I’ve been thinking about this problem for several years, and I’m good at building websites, but beyond that… well, it’s a formidable task, to say the least. I decided to launch it open source, with the thinking that people more qualified than myself might one day find it and be able to take it across the finish line.
Which is not to say I’ve given up working on it myself, but at the moment my brain hurts, ha.
https://opengravity.ai
I’ll likely continue working on it next week. The whole thing is a bit of a mess at the moment, but at least it’s a starting point. Hope all is well.