The world will fill itself with moral agents. Contribute to the healthiest/wisest.
Don’t grab resources you don’t know how to use. Cultivate one’s garden.
Avoiding xrisk and creating flourishing futures imply very similar strategies.
(Analogy: “don’t lie” and “don’t kill” are good heuristics for almost any goals.)
I think you are inserting a lot of "ought" into this "is" at this point.
From the writing it sounds like you are describing a world where a bunch of these decentralized agents share the world peacefully. You claim that people want to create centralized agents. I think it is not so much that people would want a centralized agent; it is that a single centralized agent is a stable equilibrium in a way that a multipolar world is not.
You are right that we are starting out in a decentralized, multipolar AI world right now, but this will end once an AI is capable of stopping other AIs from progressing. Obviously you could not allow another AI that is not aligned with you to become more powerful than you, even if you were human-aligned. And if another AI were at around the same capability level at the same time, you would obviously collaborate with it in some way to stop other AIs from progressing.
Having dozens of AIs continuously racing up the recursive self-improvement (RSI) curve toward superintelligence is simply not a stable world that will persist; obviously they would fight over resources. There aren't any solar systems with five different suns orbiting each other.
While I do think there are many reasons pluralism isn't stable (and it becomes increasingly unstable as information technology advances), and there might never meaningfully be pluralism under AGI at all (e.g., there will probably be many agents working in parallel, but those agents might largely share goals and be subject to very strong oversight, in ways that humans often pretend to be but never have been), which I'd like to see Ngo acknowledge, the period of instability is fairly likely to be the period during which the constitution of the later stage of stability is written, so it's important that some of us try to understand it.