I am not sure you can legitimately characterise the efforts of an intelligent agent as being “random stumbling”.
Anyway, I was pointing out a flaw in the reasoning supporting a small probability of failure (under the described circumstances). Maybe some other argument supports a small probability of failure. However, the original argument would still be wrong.
Apart from trying to develop a deterministic self-improving system that has a stable goal from the beginning, other approaches—including messy ones like neural networks—might also result in a stable self-improving system with a desirable goal.
A good job too. After all, those are our current circumstances. Complex messy systems like Google and hedge funds are growing towards machine intelligence—while trying to preserve what they value in the process.