unless the LLM-AGIs have systematically higher wisdom, cooperation, and coordination than humans do, which I don’t particularly expect
I think there’s at least one pretty solid reason to expect that: The AIs will be much smarter than the median human.
Human coordination is constrained by the fact that humans vary substantially in intelligence. For instance, most humans don’t really understand economics. I think the median human could understand basic micro with better educational interventions, but it’s certainly harder for the average human than for the cognitive elite. The fact that most people don’t understand economics makes Earth’s public policy much, much worse than the best ideas Earth has been able to come up with.
When we have AIs that are good enough to be doing the AI research themselves, that means they’re about as capable as the smartest humans. And unlike with humans, there doesn’t have to be a wide spread of cognitive ability: the whole population of AIs could be similarly intellectually capable.
I would guess that this would make them much more effective at coordinating with each other, and collectively identifying good equilibria, even if it doesn’t make them generically wiser (though it might also make them generically wiser).