Minor note: they also discuss the idea that the human brain uses exotic computation (they correctly don’t spend much time on this objection).
They don’t spend enough time addressing software complexity issues at all. In particular, if the complexity hierarchy strongly fails to collapse (that is, if P, NP, co-NP, PSPACE, and EXP are all distinct) and hardware design requires solving computationally hard problems (which seems plausible, since graph coloring, an NP-complete problem, shows up in memory optimization, and the traveling salesman problem, also NP-complete, shows up in circuit design), then improvements in hardware will likely yield diminishing marginal returns when applied to making new hardware. Earlier discussions I’ve had here with cousin_it (e.g. see this) make me less inclined to think that this is as bad a barrier as I previously thought, but simply asserting that it will be handled by hardware improvements (which is mainly what they do in this article) seems insufficient.
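To make the graph-coloring point concrete, here is a toy sketch (my illustration, not anything from the article) of how register/memory allocation is commonly modeled: variables are nodes in an "interference graph", edges join variables that are live at the same time, and a valid assignment of registers is a proper coloring. Since optimal coloring is NP-complete, real compilers fall back on heuristics like the greedy pass below, which is fast but not guaranteed optimal — exactly the kind of gap that matters if better hardware keeps demanding harder instances.

```python
def greedy_color(graph):
    """Assign each node the smallest color not used by its neighbors.

    graph: dict mapping node -> set of neighboring nodes.
    Returns dict node -> color (int). Fast, but not guaranteed optimal:
    finding the true minimum number of colors is NP-complete.
    """
    colors = {}
    # Heuristic: color highest-degree (most-constrained) nodes first.
    for node in sorted(graph, key=lambda n: len(graph[n]), reverse=True):
        used = {colors[n] for n in graph[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

# Hypothetical example: five variables whose live ranges overlap in a
# 5-cycle (a-b-c-d-e-a). An odd cycle needs 3 registers at minimum.
interference = {
    "a": {"b", "e"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"c", "e"},
    "e": {"d", "a"},
}
allocation = greedy_color(interference)  # uses 3 colors here
```

The heuristic happens to hit the optimum on this small instance; on adversarial graphs greedy coloring can be far from optimal, which is the sense in which "just throw more hardware at it" may not settle the question.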
Their presumption seems to be an algorithmic human-level artificial general intelligence that can run on a digital computer without diminishing returns and can handle the complexity of its own design parameters. You can’t argue with that, because they simply assume all the necessary presuppositions.
What is still questionable, in my opinion, is whether any level of intelligence would be capable of explosive recursive self-improvement, since it has to use its own intelligence to become more intelligent, which is by definition the same problem we currently face in inventing superhuman intelligence. Sure, clock speed is really the killer argument here, but to increase clock speed the AGI can only use its current intelligence, just as we humans have to use our intelligence to increase clock speeds, and we don’t call that an explosion. Why are they so sure that increasing the amount of available subjective time significantly accelerates the discovery rate? They mention micro-experiments and nanotechnology, but the AGI would have to invent those as well, without micro-experiments and nanotechnology to help it do so. Just thinking faster might help it read the available literature more quickly, but it would not produce new data, which requires real-world feedback and a large amount of dumb luck. Humans can do all of this as well, and can use the same technology in combination with expert systems, which further diminishes the relative acceleration from AGI.