Nice, well-written post.
You do show that AI risk could be unlikely, because recursive self-improvement could be a conjunctive scenario. But without a better sketch of which conjunctions recursive self-improvement (or AGI) actually requires, you've only succeeded in keeping that possibility open, not in arguing for a lack of risk. I think you've created a great starting point for a Hypothetical Apostasy for those here who believe strongly in SIAI. Ultimately, though, a healthy discussion of the actual conjunctions involved is what it now takes to decide whether there are risks from AI.
My challenge (ten minutes' worth of thought) to whether there really is a long conjunction:
1. Self-improvement is a useful instrumental goal for most imaginable systems with goals.
2. Recursive improvement is implied by the huge room for improvement of… pretty much anything, but specifically of systems with goals. (EDIT: XiXiDu's next post addresses and disagrees with this.)
3. AI programmers are creating systems with goals.
4. One such system might someday become powerful/intelligent enough to realize many of its instrumental goals.
That seems to be all it takes. Are there other relevant factors I'm forgetting? I'd say the first three each have a probability of .98+. The fourth is what SIAI is trying to deal with.
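To make the conjunctive arithmetic explicit (a rough sketch using my numbers above, and assuming the premises are approximately independent):

$$P(\text{risk}) \gtrsim 0.98^3 \cdot P(4) \approx 0.94 \cdot P(4)$$

So even treated as a conjunction, the first three premises barely discount the estimate; nearly everything hinges on the probability of the fourth.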