MIRI wants to learn more about whether its basic assumptions and their apparent strategic implications are sound.
I still think it would be valuable to hear what relevant, independent AI experts think about these basic assumptions and strategic implications, perhaps accompanied by a detailed theory as to why they’ve come to wrong answers and MIRI has more advanced insight.
Well, we can do this for lots of specific cases. E.g. last time I spoke to Peter Norvig, he said his reason for not thinking much about AI risk at this point (despite including a discussion of it in his AI textbook) was that he’s fairly confident AI is hundreds of years away. Unfortunately, I didn’t have time to walk him through the points of When Will AI Be Created? to see exactly why we disagreed on this point.
This will all be a lot easier when Bostrom’s Superintelligence book comes out next year, so that experts can reply to the basic theses of our view when they are organized neatly in one place and explained in some detail with proper references and so on.