Given that AGI seems imminent and there is currently no good alignment plan, is there any value in discussing what it might take to keep (or move) the most humans out of the way? I don't want to discourage us from steering the car out of the crash, so by all means we should keep looking for a good alignment plan, but aren't seat belts also a good idea?
As an example: I don't particularly like ants in my house, but even though humanity is a superior intellect to ants, we aren't trying to exterminate them off the face of the Earth (mosquitoes may be another story). Exterminating all ants just doesn't help achieve my goals; it's a huge expenditure of resources I don't care to make. Ants are thriving in a world full of superintelligence (though of course the gap between an AGI and us could be far larger than the gap between us and ants).
Assuming we fail at alignment, but the AGI's underlying goals don't explicitly involve exterminating every single human or making the planet uninhabitable, perhaps humans can just try to stay out of the way? Is it worth spending time figuring out which strategies would cause a potential AGI the least grief, or keep groups of humans the most out of its way?
Perhaps there are two angles to this question: (1) how can humans in general be as ant-like as possible in the above dynamic? (2) if you were a peaceful mosquito who had sworn off bothering humans, how could you make yourself, your friends, family, loved ones, and anyone who will listen least likely to be exterminated alongside the bothersome mosquitoes?
As hyperbole to demonstrate the point: I feel like information workers in S.F. or military personnel in D.C. are more likely to cause an AGI grief than uncontacted tribes on remote islands. An AGI may not decide it's worth the energy to deal with the folks on the islands, especially if they are compliant and want to stay there.