Is there someone you’d point to as being a better “strategic thinker on the topic of existential risk from AGI”, as is the topic of discussion in this thread?
Good question. ARE there any A-tier strategists at all on x-risk? I’d nominate Stuart Russell. Hm. Even Yoshua Bengio is arguably having a larger impact than Eliezer in some critical areas (policy).
For pure strategic competence, Amandeep Singh Gill.
Jaan Tallinn. Maybe even Xue Lan.
Russell, Bengio, and Tallinn are good but not in the same league as Yudkowsky in terms of strategic thinking about AGI X-derisking. A quick search of Gill doesn’t turn up anything about existential risk but I could very easily have missed it.
Okay, I think I see the confusion. Your phrasing makes it seem (to me at least) like Eliezer has had the biggest strategic impact on mitigating x-risk, and is arguably also the most competent there. I would really not be sure of that. But if we talk about strategically dissecting x-risk, without necessarily mitigating it, directly or indirectly, then maybe Eliezer would win. Still, I would maybe lean towards Stuart.
Gill IS having an impact that de facto mitigates x-risk, whether he uses the term or not. But he is not making people talk about it (without necessarily doing anything about it) as much as Eliezer. In that sense one could argue he isn’t really an x-risk champ.
From Wikipedia:
What? I have never heard of this person, and the little I have read suggests he is deeply deeply confused about the nature of AGI. This doesn’t feel like a serious suggestion.
Why not? What does the quote have to do with anything?
If one of your central takeaways from AI is that it is “going to help accelerate the process of achieving the UN’s Sustainable Development Goals” then you are deeply miscalibrated about the impact of AI.
It’s like saying that “the industrial revolution could help improve the efficiency of chariot production”. Bro, there are going to be no chariots after the industrial revolution. There are also going to be no more sustainable development goals post-ASI.
Like, it’s a random quote, and maybe there was more context that makes it make more sense, but it’s the only object-level take of his I could find on his Wikipedia page. If he has more relevant things to say, they didn’t make it into what I could quickly find out about him, and my first skim strongly suggested someone who lacks situational awareness (in the https://situational-awareness.ai/ sense).
Habryka, I genuinely don’t know why that quote appeared for you first.
https://en.wikipedia.org/wiki/Amandeep_Singh_Gill
First sentence under “Work”: “Gill has written about the impact of artificial intelligence (AI) on modern life and the necessity for establishing appropriate regulatory frameworks to ensure AI plays a positive role in the future.”
He is the UN envoy. He is in policy, politics, regulation.
This is what “gill achievements AI” brings up for me (you don’t need the full name):
“Gill helped secure high-impact international consensus recommendations on regulating Artificial Intelligence (AI) in lethal autonomous weapon systems in 2017 and 2018, the draft AI ethics recommendation of UNESCO in 2020, and a new international platform on digital health and AI.”
This is the AI advisory board: https://www.un.org/en/ai-advisory-body/members
Edit: Check out the credentials of its members. I see a lot of competence there. Compare with national committees. Steering this is a strategic achievement.
He is a political coordinator. I hope that you can understand that he has to discuss existing AI, not just future AI.
Think about what kind of statements give you political leverage in his position. I could also ask how many policies banning AI research or deployment Eliezer has successfully pushed through, to make this point clearer.
In general, I stand by Stuart as the overall champ. Gill is last on alignment knowledge (though still knowledgeable about AI), high on strategy.
Back to topic:
All I am pointing out is that you don’t need to throw in the word “strategic” anywhere when mentioning that Eliezer is an excellent x-risk analyst and advocate. I think this is an important distinction, because we also need strategic AI safety champions and political regulation.
Note: Even people who don’t believe in x-risk can have a huge impact, if they successfully regulate AI in key areas/regions, or internationally.
I am not really sure what all of the things you are saying here are supposed to tell me. Maybe I am supposed to respect random people in the UN? I do not generally think highly of the UN, or think involvement in it is much of a sign of being a good strategist (though, as with all highly selected positions, it is of course evidence of being in the top percentiles of competence, but not more than that).
I didn’t quote these sections because they too are largely uninformative:

“Gill helped secure high-impact international consensus recommendations on regulating Artificial Intelligence (AI) in lethal autonomous weapon systems in 2017 and 2018, the draft AI ethics recommendation of UNESCO in 2020, and a new international platform on digital health and AI.”

Like, what is this supposed to tell me? I really don’t know the sign of lethal autonomous weapon regulation. My guess is it’s mildly bad and I was historically opposed to regulating it, but it’s not super clear and I’ve flipped back and forth a few times. The “platform for digital health and AI” seems like a red flag, but I don’t know.