It doesn’t come across as sociopathically cold and calculating to me, though it may come across that way to others. Some people who have never encountered effective altruism or Less Wrong might think you sociopathic, but most people aren’t reflective enough to notice whether they care about the overwhelming magnitude of future civilizations, or about starving children far away. So the values most others signal and believe themselves to hold don’t lead to consequences any different from yours. The capacity to care about so many far-away people seems difficult to maintain all the time, mostly because carrying that much empathy at the forefront of your mind would be overwhelming. Saying so about real people in particular might seem sociopathic no matter who says it.
Anyway, at first it confused me why existential risk reduction is correlated with effective altruism. Effective altruism is a common banner that promotes values shared by existential risk reduction and Less Wrong, such as reflective thinking, evidence-based evaluation, and concern for helping others distant in time and space. I think the x-risk reduction community chooses to align with effective altruism because doing so puts it in a strong enough position to attract more capital: financial capital, human capital, relevant expertise, etc.
While x-risk may only get a small slice of the pie that is effective altruism, as effective altruism grows, so does the absolute size of the support x-risk reduction receives. Also, the common impression is that effective altruists are talented and reflective folk to begin with, so if one can convert their concern from poverty reduction and global health to existential risk reduction, it helps out. Further, cause areas that would otherwise be at odds with one another accept each other within effective altruism because they all gain from cooperation. For example, such efforts are coordinated by the Centre for Effective Altruism, which leads to everyone under the ‘EA’ banner receiving more attention.
Meanwhile, the existential risk reduction community doesn’t look worse by associating with effective altruism, even if it will always be a smaller part of it than poverty reduction. It’s not as if associating with effective altruism costs the cause of x-risk reduction so much that it would be a smaller or weaker movement on its own. Aside from the coverage of the Future of Humanity Institute’s publications like Superintelligence by Nick Bostrom (and its consequences, like Elon Musk’s support), effective altruism might be boosting the profile of x-risk more than anything else.
The attitude you express towards short-term effective altruism, given the magnitude and importance of post-Singularity civilization, is one I’ve seen expressed by people within or adjacent to the effective altruist community, some from Less Wrong. I think these disagreements and sentiments don’t come out much in central or mainstream coverage of effective altruism because they would look bad and confuse the public.