Identifying this as ‘high value’ reminds me of a bit from Hamming’s You and Your Research:
The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn’t work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It’s not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important.
While there’s definitely value in saying “hmm, this fruit looks tasty” even if there’s no obvious path to the fruit, in the hopes that someone else sees some easy way to get it, I think most people use ‘high value cause area’ to instead mean something like ‘high profit cause area’—the value of working on that cause is high, as opposed to the value of completing it being high.
I think I’m deeply pessimistic about our prospects for reducing tribalism in the short-term; the mechanisms for previous approaches were things that take historical timescales to have visible effects. The idea of widespread positive psychology is also not terribly new; I view most religions and philosophies as attempting something in this vein, and they’re engaged in memetic warfare with ideologies that encourage tribalism.
Hamming’s examples seem to be in a different category, in that nobody has any clue of what could even plausibly lead towards those goals. (I believe, anyway; I’m not terribly familiar with physics.) Whereas with tribalism, there seem to be a lot more leads, including: i) knowledge about what causes it and what has contributed to changes of it over time; ii) research directions that could help further improve our understanding of what does and doesn’t cause it; iii) various interventions which already seem to work in a small-scale setting, though it’s still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations).
I would say that this definitely looks at least as tractable as, say, MIRI-style AI safety work: that is, there are lots of plausible directions through which one could attack it, and though none of them seems likely to work out and provide us a solution in the short term, it seems plausible (though not guaranteed) that they would allow us to figure out a solution in the long term.
knowledge about what causes it and what has contributed to changes of it over time
I feel like there’s conservation of expected evidence stuff going on there—it’s because we know a lot about how gravity works that we think anti-gravity is impossible. Similarly, I’m pessimistic about extending trendlines from the past because of the likely causal factors of those trendlines.
For example, it looks to me like a pretty large part of the reduction in tribalism was a smoothing of genetic relatedness. It’s not just that cousin marriage bans reduce clannishness, because there are fewer people less than three cousins away from you, but also that your degree of relatedness to the average person in your society goes up—you have more fourth cousins. But the free movement of people cuts against that trend; decreasing ethnic homogeneity also decreases trust. Which puts us in a bind—there are many nice things about the free movement of people, and paying attention to ethnic homogeneity is itself a tribal signal.
I would say that this definitely looks at least as tractable as, say, MIRI-style AI safety work: that is, there are lots of plausible directions through which one could attack it, and though none of them seems likely to work out and provide us a solution in the short term, it seems plausible (though not guaranteed) that they would allow us to figure out a solution in the long term.
I think a core difference between MIRI-style AI safety work and this is that MIRI is trying to figure out what non-adversarial reasoning looks like in a mostly non-adversarial environment, whereas the ‘non-tribal’ forces have to do their figuring out in a mostly adversarial environment.
For example, one thing that you identify as a cause of tribalism is threat; when people feel less secure, they care more about who their friends and enemies are, and who can be relied on, and similar things. One might hope that we could put out messages that make people feel more secure and thus less likely to fall into the trap of tribalism. But those messages don’t exist in a vacuum; those messages compete with messages put out by tribalists who know that threat is one of their major recruiting funnels, and who thus are trying to increase the level of threat. This direct opposition is a major factor making me pessimistic.
I feel like there’s conservation of expected evidence stuff going on there—it’s because we know a lot about how gravity works that we think anti-gravity is impossible. Similarly, I’m pessimistic about extending trendlines from the past because of the likely causal factors of those trendlines.
Much of the recent increase in tribalism seems to have been driven by the rise of social media, though; and it’s far from obvious that every form of social media would have to contribute to equally toxic dynamics. Similarly, although it remains debated whether depression etc. have actually become more common, it seems like a reasonable guess that they might have.
It’s not like gravity, where we’ve gotten most of the domain figured out and aren’t coming up with anything new; rather, the domain is changing all the time, and there are various identifiable factors that are contributing to the problem and pushing it in different directions, and these have changed over time due to causal forces that we can identify.
I think a core difference between MIRI-style AI safety work and this is that MIRI is trying to figure out what non-adversarial reasoning looks like in a mostly non-adversarial environment, whereas the ‘non-tribal’ forces have to do their figuring out in a mostly adversarial environment.
Clearly there is some degree of adversariality going on with the problem. But while there are a lot of people who benefit from tribalism to some extent, it doesn’t seem obvious that they wouldn’t turn from adversaries to allies if you gave them an even better solution for dealing with their problems.
E.g. I spoke with someone who had done research on some of the nastier SJW groups and had managed to get into some private Facebook groups from which outrage campaigns were being coordinated. He said that his impression was that much of this was being driven by a small group of individuals who seemed to be doing really badly in life in general, and who were doing the outrage stuff as a kind of coping mechanism. If that’s correct, then while those people would probably like to maintain tribalism as a way of preserving their coping mechanism, even they would probably prefer to have their actual problems fixed. And in fact, if the right person just approached them and offered to help them with their actual problems, it’s unlikely that they’d even perceive that as being in opposition to the outrage stuff they were doing—even if getting that help would in fact cause them to lose interest in the outrage stuff.
Also, I obviously don’t have a representative sample here, but it feels like there are a lot more people who hate the tribal climate on social media than there are people who would like to maintain it. Most people don’t know what to do about it (and in fact aren’t speaking up because they are scared that they’d become targeted if they did), but would be happy to help out with reducing it if they just knew how.