SIAI’s leaders and community members hold a lot of beliefs and opinions, many of which I share and many of which I don’t. But the key difference between our perspectives lies in what I’ll call SIAI’s “Scary Idea”: the idea that progressing toward advanced AGI without a design for “provably non-dangerous AGI” (or something closely analogous, often called “Friendly AI” in SIAI lingo) is highly likely to lead to an involuntary end for the human race.
Of course, it’s rarely clarified what “provably” really means. A mathematical proof can be applied to the real world only in the context of some assumptions, so maybe “provably non-dangerous AGI” means “an AGI whose safety is implied by mathematical arguments together with assumptions that are believed reasonable by some responsible party” (where the responsible party is perhaps “the overwhelming majority of scientists” … or SIAI itself?).
Please note that, although I don’t agree with the Scary Idea, I do agree that the development of advanced AGI carries significant risks.
Ben Goertzel: The Singularity Institute’s Scary Idea (and Why I Don’t Buy It) (Thanks Kevin)
Have turned this into a top-level article—many thanks for the pointer!