Setting up a common enemy is an excellent way to engender cooperation between two competing groups. While this common enemy does not necessarily need to be a third group, the feeling of uniting against a common external threat is a powerful motivator that can drive groups to do truly great things. We didn’t land on the moon because of inward-focused warm fuzzies. We landed on the moon to show the Soviet Union we were better at rockets than they were.
In fact, the lack of a Sputnik-like threat warning for AGI is probably the reason AI X-Risk research is so neglected. If we could set up an external threat on the order of Sputnik, or of Einstein’s letter warning of German efforts to build an atomic bomb, we’d be making huge strides toward figuring out whether building a friendly AI is possible.
It feels noteworthy that your historical examples are going to the moon and making the atomic bomb: the first turned out to be of so little practical value that it was done only a few times and then given up once all the symbolic value had been extracted from it, and the second was a project explicitly aimed at hurting the out-group.
So uniting against a common enemy may drive people to do difficult things, but the value of those things may be mostly symbolic, or the things themselves may be explicitly aimed at doing harm.
(Though just to check, I think we don’t actually disagree on much? You said that “it’s far more productive to try to find ways to redirect tribalist impulses towards positive ends” and I said that “in-group bonding is a good and valuable thing, but it’s not obvious to me that it could not be separated from out-group aggression”, so both of us seem to agree that we should keep the good sides of in-/out-group dynamics and try to reduce the bad sides; I just define “tribalism” as referring purely to the negative side, whereas you’re defining it to refer to the whole dynamic.)