If you have a human-level intelligence which can read super-fast, and you set it free on the internet, it will learn a lot very quickly. (p71)
But why would you have a human-level intelligence that could read super-fast, yet hadn't already read most of the internet in the process of becoming an incrementally better stupid intelligence while learning how to read?
Similarly, if your new human-level AI project used very little hardware, then you could buy heaps more cheaply. But it seems somewhat surprising if you weren't already using a lot of hardware, given that hardware is cheap and helpful and can substitute for good software to some extent.
I think there was a third example along similar lines, but I forget it.
In general, these sources of low recalcitrance would be huge if you imagine AI appearing fully formed at human-level without having exploited any of them already. But it seems to me that probably getting to human-level intelligence will involve exploiting any source of improvement we get our hands on. I’d be surprised if these ones, which don’t seem to require human-level intelligence to exploit, are still sitting untouched.
It may also be worth noting that there's no particular reason to expect a full-blown AI that wants to do real-world things to also be the first good algorithmic optimizer (or hardware optimizer), for example. The first good algorithmic optimizer can be run on its own source, performing an entirely abstract task, without having to do the calculations relating to its hardware basis, the real world, and so on, which are an enormous extra hurdle.
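As a toy illustration of that "entirely abstract task" (my own sketch, not anything from the book or the comment above): a minimal source-to-source optimizer that constant-folds Python code using only the standard ast module. Its whole world is source text in, source text out; it needs no model of hardware or physical surroundings, and nothing stops it from being pointed at its own source file.

```python
import ast
import operator

# Arithmetic operators the folder knows how to evaluate ahead of time.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Pow: operator.pow}

class ConstantFolder(ast.NodeTransformer):
    """Replace constant arithmetic subexpressions with their values."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold subexpressions first, bottom-up
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

def optimize(source: str) -> str:
    """Return a constant-folded version of the given source code."""
    return ast.unparse(ConstantFolder().visit(ast.parse(source)))

print(optimize("seconds = 2 * 60 * 60"))  # -> seconds = 7200
# Self-application is just another call:
#   print(optimize(open(__file__).read()))
```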
It seems to me that the issue is that the only way some people can imagine this 'explosion' happening is by imagining fairly anthropomorphic software performing a task monstrously more complicated than a mere algorithmic-optimization 'explosion', in the sense that algorithms are replaced with their theoretically ideal counterparts, or something close to them. (For every task there's an optimal algorithm for doing it, and you can't do better than that algorithm.)
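To make "theoretically ideal counterpart" concrete (again my own toy example, not the commenter's): for membership queries on a sorted array, a linear scan costs O(n) comparisons, while binary search costs O(log n), and no comparison-based method can beat that bound. So binary search is the kind of ceiling the parenthetical gestures at.

```python
from bisect import bisect_left

def contains_naive(xs, target):
    """Membership by linear scan: O(n) comparisons."""
    for x in xs:
        if x == target:
            return True
    return False

def contains_ideal(xs, target):
    """Membership by binary search on a sorted list: O(log n) comparisons,
    matching the lower bound for comparison-based search."""
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

xs = list(range(0, 1_000_000, 2))  # sorted even numbers
assert contains_naive(xs, 123456) == contains_ideal(xs, 123456)
```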
Super-fast reading acquires only crystallized intelligence in a theoretical domain. Any educator knows that the real learning effect comes from practical experience, including setbacks and reflection on upcoming problems using that theoretical knowledge.
If a sub-HLMI assists in designing improved hardware, the resulting new AI is not fully capable in an instant. Humans need 16 years to develop their full fluid intelligence; building up the crystallized intelligence to become a head hardware architect takes 20 more. A genius like Wozniak reached this level at the age of 24, in a world of low IT complexity. For today's complexity that would not suffice.
I don't think the point Bostrom is making hangs on this timeline of updates; the point is simply that if you take an AGI to human level purely through improvements to the quality of its intelligence, it will be superintelligent immediately. This point is important regardless of timeline: if you have an AGI that is low on quality of intelligence but has these other resources, it may work to improve that quality, and at the point where its quality is equivalent to a human's, it will be beyond a human in ability and competence.
Perhaps this is all an intuition pump for appreciating the implications of a general intelligence running on a machine.
So basically you’re arguing there shouldn’t be a resource overhang, because those resources should have already been applied while the AI was at a sub-human level?
I suppose one argument would be that there is a discrete jump in your ability to use those resources. Perhaps sub-human intelligences just can't read at all. Maybe the correct algorithm is so conceptually separate from the "let's throw lots of machine learning and hardware at it" approach that it doesn't work at all until it is suddenly done. However, this argument simply passes the explanatory buck: now we need to explain this discontinuity, and can't rely on the resource overhang.
Another argument would be that your human-level intelligence makes many more resources available to you than before, because it can earn money or steal them for you. However, this only seems applicable to a '9 men in a basement' type of project, rather than a government-funded Manhattan project.