I’ve been getting into more general political theory recently, and I really like the idea of multilateralism; it feels a bit underrepresented in LW rhetoric, maybe due to the US-centricity of the site? (I liked this interview with Finland’s prime minister; I thought he was quite well spoken: https://youtu.be/ubZeguAk0fM?si=7H7nJfnCANCWRcDN)
The difference is basically between being driven by cooperation, values, norms, and treaties on the multilateralist side, versus by power on the multipolar side. A lot of the analysis seems to be based on power, and this is especially true of US-China relations. That just feels obviously worse than aiming for a multilateral world order, and based on some sort of power-concentration assumption?
Maybe it is also partly due to the unipolar world we have had up to now, with the US as global hegemon?
Some might say that aiming for multilateralism is implausible and that power concentration is a fact of the world, given how ASI will arise. I do not believe this is the case: model distillation exists and LLMs are highly parallelisable, both of which point towards broad, distributed usage of LLMs in the future.
Finally, there is a serious possibility at this point that the USA will grow into a proto-fascist state, since its behaviour is getting easier and easier to predict by viewing it through a fascist lens.
Power generally tends to corrupt, and distribution of power is often a good thing, as it creates leverage for deals and cooperation. Maybe this is a mega lukewarm take, but I feel that some people are still stuck in the “we need a Manhattan Project for AI safety” train of thought. Also, finally, I would really like to see Anthropic, Google DeepMind, or any AI company for that matter involve themselves a lot more in improving democracy across the world, and become a lot more globalist. This is a strategy change that is plausible to implement, and it would likely decrease the risks from power concentration, as states seem to be getting quite grabby in this changing world order.
If I were Dario Amodei in the future bestseller Anthropic and the Methods of Rationality, I would start to create multilateral cooperation across the world, as that would build lots of leverage and goodwill for future adoption of technologies, even with the US, since you could use your global relationships to get leverage on national decisions.
End of rant, European out.
One distinction worth making is between multiple pecking orders, multiple causal-decision-theoretic agents, and multiple logical-decision-theoretic agents. How many pecking orders this planet can sustain is an empirical question, and one that may have very different answers at different tech levels, not necessarily along a monotonic trend (e.g. airplanes and satellites push towards unipolarity or some sort of kayfabe multipolarity, but em cities might then create advantages to hyperlocal agglomeration until you bump up against heat-diffusion limits, and there might be room for plenty of those).
At the limit, rational agents who can negotiate with each other and adjust their beliefs and make binding commitments should form a single logical-decision-theoretic agent with many differently informed and specialized local information processing nodes. This would be a singleton constituted by autonomous agents in a situation of radical equality, very different from a single dominant pecking order, but, depending on how you count, maximally unipolar or maximally multilateral.
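The merging claim can be illustrated with a toy example (my own sketch, not from the comment): two agents in a one-shot prisoner’s dilemma. Choosing independently, each finds that defection strictly dominates, so both defect; if they can make a binding joint commitment, the pair effectively becomes one agent maximizing total payoff, and both cooperate.

```python
# Toy illustration: two agents in a one-shot prisoner's dilemma.
# Payoffs are (row player, column player) for actions C (cooperate), D (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def independent_play():
    """Each agent best-responds on its own: for the row player, D beats C
    against either opponent action (5 > 3 and 1 > 0), and the game is
    symmetric, so both defect."""
    return ("D", "D")

def joint_commitment_play():
    """With a binding commitment, the pair acts as a single agent and
    picks the action profile maximizing the sum of payoffs."""
    return max(PAYOFFS, key=lambda profile: sum(PAYOFFS[profile]))

print(independent_play())       # ('D', 'D') -> payoffs (1, 1)
print(joint_commitment_play())  # ('C', 'C') -> payoffs (3, 3)
```

This only captures the binding-commitment part of the claim, of course, not the belief-adjustment or specialization parts.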
“At the limit, rational agents who can negotiate with each other and adjust their beliefs and make binding commitments should form a single logical-decision-theoretic agent with many differently informed and specialized local information processing nodes”
(Can’t seem to switch from markdown so no inline)
I think a question this raises is whether this should then be considered one larger agent or a collection of subagents? Is it not good for flexibility and resilience if the local nodes are able to take adaptive action over time?
I think we get into some very fun territory of distributed agency and hierarchical agency here.
Many nodes being a single logical agent is ideally compatible with them taking the sorts of adaptive actions over time consistent with being different causal (forwards-in-time) agents.
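One way to picture this compatibility (a hypothetical sketch of mine, not anything defined in the thread): several nodes that each act only on their own local observations, forwards in time, yet all instantiate the same decision procedure against the same shared utility, so at the logical level there is only one agent.

```python
# Hypothetical sketch: many causal agents, one logical agent.
# Each node acts only on its local observation, but every node runs
# the same decision procedure against the same shared utility function.

def shared_utility(observation, action):
    """One utility function known to all nodes: reward matching the
    action to the locally observed demand."""
    return -abs(observation - action)

def shared_policy(observation, actions=(0, 1, 2, 3)):
    """The single decision procedure every node instantiates."""
    return max(actions, key=lambda a: shared_utility(observation, a))

# Three nodes with different local observation streams take different
# adaptive actions over time...
local_streams = {
    "node_a": [0, 1, 2],
    "node_b": [3, 3, 1],
    "node_c": [2, 0, 0],
}
actions = {name: [shared_policy(obs) for obs in stream]
           for name, stream in local_streams.items()}

# ...yet logically they are one agent: any two nodes given the same
# observation would act identically.
assert shared_policy(2) == actions["node_c"][0]
print(actions)
```

The nodes’ behaviours diverge causally (different inputs, different actions) while remaining one logical policy, which is roughly the compatibility being asserted.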
Could you link to places or give a definition that makes these a little clearer? Are we saying they act in equivalent ways under a given decision theory, or how are you defining this?
Next time you link a 40-minute video with an introduction that is unrelated to the point you are making, could you please add a starting time? I watched the first 10 minutes, then gave up.
So not sure whether this is relevant, but to me “multi-lateralism” sounds like a dog whistle for making Russia great again. At least, whenever people mention something like that, in my experience it always implies that we should somehow help Russia become a world superpower. I mean, people talk about the world being unilateral or multilateral, but when you listen to them for a longer time, it becomes clear that they would consider the world with USA and Russia being the only big players as sufficiently multilateral, while a world with e.g. USA, EU, China, India being big players and Russia a small player is insufficiently multilateral for them.
From my perspective… well, the world in the 1984 novel was technically multilateral, so it is not necessarily a good thing.
Without touching the object level, predominance of the latter view (at least when it comes to the top level of the “civilizational hierarchy”) is what you’d naturally expect from a culture that is deeply into very deep atheism and convergent extinction-level-Goodhart[1] power-seeking consequentialist cognition.
[1] Credit to Vojta Kovařík for this term.