It’s very likely that there’s no moat between getting an AI to do what’s good for one person and getting an AI to do what’s good for humanity. There’s no additional technical difficulty in getting one once you know how to get the other, because both involve the same amount of “interpreting humans the way they want to be interpreted.”
Which one we get isn’t a matter of technical problems, but of the structures surrounding AGI projects.
Right, I guess that’s the main problem I’m gesturing at here. It seems pretty likely that if we create aligned AGI, there will be more than one of them (unless whoever creates the first makes a dedicated effort to prevent the creation of others).
In that circumstance, the concentration-of-power dynamics I described still seem concerning.