I have been in some similar situations and have thought about why these disagreements and frictions occur. I believe one of the cruxes is that the general humanities conception of AI progress is static, while the general LessWrong conception of AI progress is dynamic. This means that humanities/ethics researchers assume that the current state of AI is "about as good as it will get," at least for the near future, and so focus on how AI is deployed right now and its direct impacts on communities today. LessWrong, on the other hand, looks forward and believes that the current state of AI is just a brief transition into something much more radical and dangerous.
Because of this difference in conceptions, AI Ethics researchers think LessWrongers are doom-mongers focused on far-future hypotheticals, while LessWrongers look at the AI Ethics crowd and see people trying to repair a fence while a tsunami is bearing down on the whole town. Both crowds can then look at the same data and draw entirely different conclusions (see, for example, Ed Zitron's reporting on OpenAI vs. Zvi's newsletters). If, for example, it became undeniably clear that terminators would be marching in the streets within a week, I would expect AI ethics folks to become very similar to AI safety people in their concerns. Similarly, if extremely strong evidence emerged that scaling laws definitively break down and deep learning is a dead end, I would expect AI safety people to find arguments about social harm much more actionable and immediately relevant.
I happen to believe (since I am posting on this forum) that the dynamic conception of AI progress is correct. However, I also have a social science/humanities background and find that AI ethics and AI safety have highly compatible concerns. In particular, both crowds are justifiably worried about disempowerment due to agentic systems (link to a paper I co-authored), potential physical or social danger to individuals and communities, and losing control of the future via, e.g., power concentration.
Hopefully this explanation resolves some of the confusion and feelings of injury that can arise when two groups with different fundamental beliefs study the same phenomenon. If we can make this difference in assumptions clear, and explain why we believe that AI progress is dynamic, I think it would go some way towards defusing this (in my opinion) unnecessary and artificial tension between the communities.