So where can I find a concise, exhaustive list of all sound arguments for and against AGI likely being an existential risk?
Nick Bostrom’s book ‘Superintelligence’ is the standard reference here. I also find the AI FOOM Debate especially enlightening, which hits a lot of the same points. You can find both easily using Google.
But I suppose that most people talking about AGI risk don’t have enough knowledge about what technically constitutes an AGI.
I agree most people who talk about it are not experts in mathematics, computer science, or the field of ML, but the smaller set of people that I trust often are, such as researchers at UC Berkeley (Stuart Russell, Andrew Critch, many more), OpenAI (Paul Christiano, Chris Olah, many more), DeepMind (Jan Leike, Vika Krakovna, many more), MIRI, FHI, and so on. And of course just being an expert in a related technical domain does not make you an expert in long-term forecasting or even AGI, a subject of which plausibly no one has a deep understanding.
And in this community Eliezer has talked often about actually solving the hard problem of AGI, not bouncing off and solving something easier and nearby, in part here but also in other places that I’m having a hard time linking right now.
Bostrom’s book is a bit out of date, and perhaps isn’t the best reference on the AI safety community’s current concerns. Here are some more recent articles:
Disentangling arguments for the importance of AI safety
A shift in arguments for AI risk
The Main Sources of AI Risk?
Thanks. I’ll further add Paul’s post ‘What Failure Looks Like’, and say that the Alignment Forum sequences raise a lot more specific technical concerns.