Ignoring the serious ethical issues inherent in manipulating people’s emotions for instrumental gain, this strategy seems highly (I’d say 95%+) likely to backfire. Intergroup relations research shows that strong us-vs-them dynamics lead to radicalization and loss of control of social movements, and the motivated reasoning literature demonstrates that identity-defining beliefs inhibit evidence-based reasoning. Moreover, even if this somehow worked, cultivating hatred of Silicon Valley and Big Tech would likely lead to the persecution of EY-types and other AI safety researchers with the most valuable insights on the matter.
I just skimmed it. It’s not obvious to me why more radicalisation and less evidence-based reasoning is a problem.
The bottleneck is not lack of evidence, it’s lack of fear.
I can live with this.
I will read the rest of your links in a while.