Even the 0.2 percent lower bound justifies the existence of the SIAI, and therefore justifies contributing to its cause.
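To make the implicit expected-value reasoning concrete, here is a minimal sketch in Python. The numbers are hypothetical placeholders chosen purely for illustration (only the 0.2% figure comes from the discussion above), so treat it as the shape of the argument rather than an actual estimate:

```python
# Minimal expected-value sketch: even a small probability of an extreme
# outcome can dominate the calculation. All numbers are illustrative only.

p_superintelligence = 0.002   # the 0.2% lower bound discussed above
stakes = 1e10                 # hypothetical disvalue of a bad outcome (arbitrary units)
risk_reduction = 1e-4         # hypothetical fraction of that risk a contribution averts

expected_value = p_superintelligence * stakes * risk_reduction
print(expected_value)         # 2000.0 -- nonzero, and it scales with the stakes
```

The point is just that the conclusion is driven by the size of the stakes, not by the probability being large.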
This essay made me think about the problem in a different way. I herewith retract my previous estimates as vastly overconfident. The best argument for assigning a probability with a lower bound of 0.1 (10%) to rapid self-improvement resulting in superhuman intelligence is this: given that human-level AGI is possible at all, there is currently no good reason to be confident that the algorithms needed to create it in the first place could not themselves be dramatically improved.
I don’t write about it much, but the chance of intelligent machines catalysing the formation of more intelligent machines seems high to me: 99%, maybe.
I don’t mean to say that they will necessarily shoot off rapidly to the physical limits (for one thing, we don’t really know how hard the problems involved are), but it does look as though superintelligence will arrive not terribly long after machines learn to program as well as humans can.
IMO, there’s quite a lot of evidence suggesting that this is likely to happen—though I don’t know if there’s a good summary of the material anywhere.
Chalmers had a crack at making the case in the first 20 pages of this. A lot of the evidence comes from the history of technological synergy, and from the extent to which computers are used to make the next generation of machines today. In theory a sufficiently powerful world government could prevent it, but one of those seems unlikely soon, and I don’t really see why it would want to.