Thank you, these are all good points. Clearly, there are some assumptions I now see should have been made explicit.
The U.S. government gives clear priority to the race for global AI dominance. I wouldn’t go as far as saying this effort has clear bipartisan support, but even the Democrats have a preference for retaining influence in the area.
I see a contrast between that appetite for AI dominance and the actual policies. Multilateralism has historically been an effective tool in building and supporting U.S. influence, and it is being heavily underused in this case.
Would the U.S. dominance be a good thing? That is a complex matter. But I do see how my writing makes it look like I think the U.S.-led order would be the positive outcome by default. I don’t think it is, but that would be a separate post.
Come to think of it, the proper title would be “Why is the U.S. bent on missing its AI Bretton Woods moment?”, because I am interested in how such a decision is made. Yes, it is just one example of the general trend of transactionalism replacing long-term influence-building under the Trump leadership. I still find it fascinating, though.
If you believe AGI is imminent, then of course you want to develop policies that address the related problems. I do not think we should summarily dismiss the potential risks of AGI, and I do say explicitly that I am not arguing about AGI timelines or probabilities here. What I do argue is that we should not base our belief in imminent AGI—and therefore our policy choices—solely on the messaging from the AI industry players. And they, whether we like it or not, are now to a great extent shaping the public discourse.
Which of these two approaches:

1. Sam Altman says AGI is coming > let’s focus all policy effort and resources on his scenarios of AGI, or
2. AGI is potentially coming > let’s review arguments and research from across the field to weigh the probabilities > let’s distribute policy effort and resources across short-, mid-, and long-term risks accordingly
do you think will yield better policy choices? Approach #2 may well conclude that AGI is extremely likely, but that conclusion will rest on a sounder and broader base.