I had been thinking about submitting something to this. The problem I’m having right now is that I’m thinking of too many things I’d hope to see covered in such a volume, including:
The three main schools of thought regarding the Singularity. (I’d actually argue at this point that the Kurzweilian “singularity” is just a different thing from the “singularity” discussed by the event horizon and intelligence explosion schools of thought, rather than a different approach to describing and understanding the same thing. The event horizon and intelligence explosion schools start with the same basic definition — the technological creation of smarter-than-human intelligences — and come to different answers about the question “What happens then?”, while Kurzweil defines the “Singularity” as “technological change so rapid and so profound that it represents a rupture in the fabric of human history”. It seems to me that, although they are somewhat nearby in memespace, they should be regarded as claims about distinct concepts, rather than distinct schools of thought regarding a single concept.)
The case for intelligence explosion and why it may be fast and local.
The AI drives.
Following the previous two: why the structure and goal system of the first sufficiently powerful general intelligence may completely determine what the future looks like.
The complexity and fragility of human value; why the large majority of possible AI designs will be (or will end up self-modifying to be) completely non-anthropomorphic utility maximizers.
Following the previous four: the need for (and difficulty of) Friendly AI.
That would be a lot to fit into 15 pages, and I feel like I’d mostly be citing Yudkowsky, E. S., Omohundro, S., etc. as sources… but I don’t know, maybe it would be a good thing to have a general introduction to the SIAI perspective, referring interested readers to deeper explanations.