Yeah, I fully expect that current level LMs will by default make the situation both better and worse. I also think that we’re still a very long way from fully utilising the things that the internet has unlocked.
My holistic take is that this approach would be very hard, but not obviously harder than aligning powerful AIs, and likely complementary. I also think we'll probably need to do some of this ~societal uplift anyway so that we do a decent job if and when we do have transformative AI systems.
Some possible advantages over the internet case are:
- People might be more motivated by the presence of very salient and pressing coordination problems
  - For example, I think the average head of a social media company is maybe fine with making something that's overall bad for the world, but the average head of a frontier lab is somewhat worried about causing extinction
- Currently the power over AI is really concentrated, and therefore possibly easier to steer
- A lot of what matters is specifically making powerful decision makers more informed and able to coordinate, which is slightly easier to get a handle on
As for the specific case of aligned super-coordinator AIs, I’m pretty into that, and I guess I have a hunch that there might be a bunch of available work to do in advance to lay the ground for that kind of application, like road-testing weaker versions to smooth the way for adoption and exploring form factors that get the most juice out of the things LMs are comparatively good at. I would guess that there are components of coordination where LMs are already superhuman, or could be with the right elicitation.