I’m not sure that your definition of a Singularity is a good one. By that definition you are only asking about a subclass of best-case Singularity scenarios. An extremely self-improving intelligence that doesn’t help humans and takes over should probably be considered a Singularity-type event. Moreover, by your definition it would constitute a Singularity if we created an entity about as smart as a cat that was able to self-improve until it was as intelligent as a raven. That doesn’t seem to fit what you actually want to ask.
I will therefore answer two questions:
First, will a Singularity occur under your definition? I don’t know, but I wouldn’t be surprised. One serious problem here is what one means by “self-improving.” Neural nets, for example, are self-improving, as are a number of other automated learning systems, and much of what they do is in some sense recursive. Presuming, therefore, that you mean some form of recursive self-improvement that allows much more fundamental changes to the architecture of the entity in question, I assign this sort of event a decent chance of happening in my lifetime (say 10-20%).
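To make that distinction concrete, here is a minimal toy sketch (my own hypothetical illustration, not anything from the original comment): one learner can only tune parameters inside a fixed structure, the way ordinary training does, while the other is additionally allowed to modify its own structure, which is closer to the "more fundamental changes to the architecture" reading of recursive self-improvement.

```python
import random

class FixedArchitectureLearner:
    """Improves only by adjusting a parameter; its structure never changes."""
    def __init__(self):
        self.weight = random.random()

    def improve(self, target=0.5, step=0.1):
        # Parameter-level learning: nudge the weight toward a target value.
        self.weight += step * (target - self.weight)


class ArchitectureModifyingLearner(FixedArchitectureLearner):
    """Also allowed to change its own structure, e.g. by adding new components."""
    def __init__(self):
        super().__init__()
        self.layers = [self.weight]

    def improve(self, target=0.5, step=0.1):
        super().improve(target, step)
        # Architecture-level change: occasionally grow a new layer.
        if random.random() < 0.2:
            self.layers.append(random.random())


if __name__ == "__main__":
    a = FixedArchitectureLearner()
    b = ArchitectureModifyingLearner()
    for _ in range(10):
        a.improve()
        b.improve()
    print("fixed-architecture weight:", round(a.weight, 3))
    print("architecture-modifying layer count:", len(b.layers))
```

The class names and the growth rule are invented purely for illustration; the point is only that the second kind of system changes what it is, not just how well-tuned it is.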
Now, I will answer what I think you wanted to ask, where by Singularity you mean the creation of a recursively self-improving AI which improves itself so quickly and so much that it rapidly becomes a dominant force in its lightcone. This possibility I assign a very low probability of happening in my lifetime, around 1-2%. Most of that uncertainty is due to uncertainty about how much purely algorithmic improvement is possible (e.g. issues like whether P=NP and the relationship between NP and BPP).