This is somewhat implied in the article—part of the style described is doing a lot of upfront work on planning, rather than charging forward to code something fast.
However, I talked about this with Eric Raymond over the weekend, and he pointed out a problem which was not implied in the article and which I hadn’t thought of—what he calls corner cases. If I remember correctly, these are unexpected interactions between parts of a program, and/or what happens when a program gets unexpected input. [1] Modularity helps, but not enough, and corner cases increase rapidly as programs get more complex.
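To make this concrete, here is a minimal sketch (my own illustration, not from the conversation) of a corner case in this sense: two components that each behave sensibly on their own, but whose interaction breaks on an input nobody planned for.

```python
def parse_scores(text):
    # Reasonable on its own: splits a comma-separated list,
    # skipping blank fields.
    return [int(s) for s in text.split(",") if s.strip()]

def average(scores):
    # Reasonable on its own: assumes there is at least one score.
    return sum(scores) / len(scores)

# Normal input works fine:
print(average(parse_scores("3,4,5")))  # 4.0

# Unexpected input: an empty string. parse_scores correctly
# returns [], average correctly divides by its length, and the
# *interaction* produces a ZeroDivisionError neither function
# contains on its own.
try:
    average(parse_scores(""))
except ZeroDivisionError:
    print("corner case: empty input crosses a module boundary")
```

Modularity doesn't save you here: each piece passes its own tests, and the failure only exists in the composition.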
It seems to me that self-amplifying AI is an optimal way of creating more corner cases.
[1] Corner cases—Wikipedia seems to give a slightly different definition. Either version is going to be a very complicated challenge.