The nearest thing to such a link is Artificial Intelligence as a Positive and Negative Factor in Global Risk [PDF].

44 pages. I don’t see anything much like the argument being asked for. The lack of an index doesn’t help. The nearest thing I could find was this:

It may be tempting to ignore Artificial Intelligence because, of all the global risks discussed in this book, AI is hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less.

He also claims that intelligence could increase rapidly with a “dominant” probability:

I cannot perform a precise calculation using a precisely confirmed theory, but my current opinion is that sharp jumps in intelligence are possible, likely, and constitute the dominant probability.

But of course the argument is a little large to set out entirely in one paper; the next nearest thing is What I Think, If Not Why, and the title shows in what way that’s not what Goertzel was looking for.

This all seems pretty vague to me.