Item 3 has some constraints on members (emphasis mine):
> Requests the Secretary-General to launch a published criteria-based open call and to recommend a list of 40 members of the Panel to be appointed by the General Assembly in a time-bound manner, on the basis of their outstanding expertise in artificial intelligence and related fields, an interdisciplinary perspective, and geographical and gender balance, taking into account candidacies from a broad representation of varying levels of technological development, including from developing countries, with due consideration to nominations from Member States and **with no more than two selected candidates of the same nationality or affiliation and no employees of the United Nations system**;
This means at most two members each from the US, the UK, China, etc. I wonder what the geographic and gender balance will actually look like; these will significantly influence the average expertise type and influence of the members.
My guess is that x-risk mitigation will not be the primary focus at first, simply because over half of the field's experts are American and British men and there are so many other interests to represent. Nor will industry be heavily represented, because it skews too American (and the document mentions conflicts of interest, and the 7 goals of the Dialogue are mostly not about frontier capabilities). But in the long term, unless takeoff is fast, developing countries will realize the US is marching towards a decisive strategic advantage (DSA), and interesting things could happen.
Edit: my guess for the composition would be similar to the existing High-level Advisory Body on Artificial Intelligence, with 39 members, of which:
- 19 men, 20 women
- 15 academia/research (including 10 professors), 10 government, 4 from big tech and scaling labs, 10 other
- 17 responsibility/safety/policy oversight positions, 22 other positions
- Nationalities: 9 Americas (incl. 4 US), 11 Europe, 11 Asia (incl. 2 China), 1 Oceania, 5 Africa. They heavily overweighted Europe, which has 28% of the seats but only about 9% of world population
- 19 high-income countries, 24 LMICs (GPT-5 isn't sure about some of the dual-nationality cases, though)
- 1 big name in x-risk (Jaan Tallinn)
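As a quick sanity check on the regional shares above (a throwaway sketch; the seat counts are the ones tallied above, the regional tallies sum to 37 rather than 39 presumably because of the dual-nationality ambiguities, and the ~9% world-population figure for Europe is approximate):

```python
# Regional seat tallies for the 39-member High-level Advisory Body on AI,
# copied from the list above.
seats = {"Americas": 9, "Europe": 11, "Asia": 11, "Oceania": 1, "Africa": 5}
members = 39

counted = sum(seats.values())  # 37; a couple of members are ambiguous
print(f"counted seats: {counted} of {members}")

for region, n in seats.items():
    print(f"{region}: {n}/{members} = {n / members:.0%}")
# Europe's 11 seats come to ~28% of the panel, versus roughly 9%
# of world population.
```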
On the flip side, this means that if we know AI experts who are not from the US/EU/China, it might be valuable to forward this information to them so that they can apply with a higher chance of being accepted.
Agree, especially from developing countries without a strong preexisting stance on AI, where choices could be less biased towards experts who already have lots of prestige, and more weighted on merits + lobbying.
Huh, that is a really good point. There are way too many people with US/UK backgrounds to easily differentiate the expert pretenders from the really substantial experts. It's even getting harder to do so on LW for many topics, as karma becomes less and less meaningful.
And I can't imagine the Secretary-General's office will have that much time to scrutinize each proposed candidate, so it might even be a positive thing overall.
Exactly! Thank you for highlighting this.
Yes, these are the usual selection-criteria constraints for policy panels. And I agree that the vast majority of big names are US-based (some UK) and male. But hey, there are lesser-known voices in EU policy that care about AI safety. Still, I do share your concern. I'll have the opportunity to ask about this at CAIDP (the Center for AI and Digital Policy) at some point soon. I think many people would agree that it's a good opportunity to raise AIS awareness in less involved member states…