Please be more specific: What kinds of inequality are how much of a threat?
I think there are defensible ways you could spin this, like:
“Inequality is likely to lead to more distributional conflicts that could metastasize into existentially risky conflicts” or
“Unchecked rich countries can take risks with the rest of the world’s population (as with, e.g., global warming, or other potentially more existentially risky tradeoffs) that they could not take if there were more international parity” or, most plausibly of the things I can think of off the top of my head,
“The coexistence of intensive economic growth (in new technologies), extensive economic growth (in greater total population and urban concentration of such), and poverty greatly increases the probability of a global pandemic.”
But it seems pretty clear from all the other phrases written down that these are just meant to be applause lights, so maybe this is an exercise in excessive charity. (Not that applause lights might not have their place; for instance, a speech meant to convince layfolk that existential risk is important.)
I have attacked this specific applause light because it seems to me dangerous with regard to some existential risks.
Imagine, for example, that a giant meteor is about to strike Earth… but we have divided all resources evenly among the 7 000 000 000 inhabitants of the planet, and there is no way to finance a defense against this danger. You would need too many people to agree to pool their resources—and you can’t get enough of them to agree. Game over. With more inequality, this specific existential risk could have been avoided.
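The meteor scenario is essentially a threshold public-goods problem, and a small Monte Carlo sketch makes the coordination failure concrete. All numbers here (total wealth, defense cost, the probability that any given agent agrees to contribute) are invented purely for illustration:

```python
import random

def p_funded(wealths, cost, p_agree=0.9, trials=10_000, seed=0):
    """Estimated probability that a defense costing `cost` gets financed,
    when each agent independently agrees to contribute all of their
    wealth with probability `p_agree` (a toy assumption)."""
    rng = random.Random(seed)
    funded = sum(
        sum(w for w in wealths if rng.random() < p_agree) >= cost
        for _ in range(trials)
    )
    return funded / trials

total, n, cost = 1000.0, 100, 950.0

# Perfect equality: funding requires at least 95 of the 100 agents to agree.
equal = [total / n] * n

# Concentrated wealth: one agent alone can cover the entire cost.
concentrated = [950.0] + [50.0 / (n - 1)] * (n - 1)

print(p_funded(equal, cost))         # rarely funded: near-unanimity needed
print(p_funded(concentrated, cost))  # funded whenever the rich agent agrees
```

The contrast is the point of the example: under equality the required coalition is so large that defense demands near-unanimous agreement, while under concentration a single actor internalizes the whole decision.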
Sure, for any decision you can invent a “what if” scenario where that specific decision appears wrong. But dividing all resources equally would probably create more problems than it would solve. Starting with: many people would soon waste their resources, so a new redistribution would be necessary every week.
(This is not a right-wing political argument, or at least I am trying not to make it one. I am perfectly OK with equality, as long as it can work well. Unfortunately, just as with designing superhuman AI, a random solution is probably very bad. We need to think hard to find a good one. And I wouldn’t trust someone with too many applause lights to do this correctly.)
I agree that there are potential upsides and downsides to almost everything. But it does seem facially unlikely that a high degree of equality will ever be achieved without institutions that are very effective at financing public goods (leaving aside the possibility of resources becoming so abundant that wealth becomes meaningless, in which case this particular problem is also solved).
A high degree of equality could also be achieved by a hypothetical institution powerful enough to redistribute everything, yet very ineffective at financing public goods.
When everyone has nothing, that too is equality.