Well, there will always remain some logical uncertainty. Anyway, I remain relatively optimistic about the feasibility of a formal proof. That optimism comes from looking at our current systems.
Currently, a desktop system (OS + desktop UI + office suite + mail + web browser) is about two hundred million lines of code, whether it is Windows, GNU/Linux, or Mac OS X. The folks at VPRI were able to build a prototype of similar functionality in about twenty thousand lines, which is 4 orders of magnitude smaller. (More details in their manifesto and their various progress reports.)
“There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.” (Fred Brooks, “No Silver Bullet”)
This is often understood as “we will never observe even a single order-of-magnitude improvement from new software-making techniques”, which I think is silly, or at least misinformed. The comparison above tells me that we have at least 3 orders of magnitude of improvement ahead of us. Maybe not enough to get a provable AI design, but still damn closer than current techniques can take us.
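To make the arithmetic explicit, here is the back-of-the-envelope calculation behind those figures (the line counts are just the round numbers quoted above, nothing more precise):

```python
from math import log10

# Round figures from above: a full desktop stack vs. the VPRI prototype.
desktop_loc = 200_000_000  # OS + desktop UI + office suite + mail + web browser
prototype_loc = 20_000     # prototype of similar functionality

ratio = desktop_loc / prototype_loc
print(f"{ratio:,.0f}x smaller, i.e. about {log10(ratio):.0f} orders of magnitude")
# -> 10,000x smaller, i.e. about 4 orders of magnitude
```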
“We know a lot about what is needed in OS, Web software, etc., from experience.”

“Is it possible to go through 3 orders of magnitude of improvement in any system, such as a future AGI, without running a working system in between?”
My reasoning is a bit different. We know that current desktop systems are a mess that we cobbled together over time with outdated techniques. VPRI basically showed that if we had known how to do it from the beginning, it would have been about 1000 times easier.
I expect the same to be true for any complex system, including AGI. If we cobble it together over time with outdated techniques, it will likely be 1000 times more complex than it needs to be. My hope is that we can avoid that needless complexity altogether. Strictly speaking that wouldn’t be an “improvement”, since there would be no crappy system to compare it to.
As for the necessity of having a working system before we can improve its design… well, that’s not always the case. I’m currently working on a metacompiler; I’m refining its design right now, and I haven’t even bootstrapped it yet.
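(For those unfamiliar with the term: “bootstrapping” here means getting the compiler to compile itself. Below is a minimal sketch of the usual fixed-point check, with hypothetical helper names; this is an illustration of the general idea, not my actual tool.)

```python
def bootstrap(compile_with, seed_compiler, source):
    """Illustrative sketch of a compiler bootstrap (hypothetical helpers).

    compile_with(compiler, source) -> a compiled binary
    seed_compiler: any pre-existing compiler able to build the first generation
    source: the new compiler's own source code, written in its own language
    """
    gen1 = compile_with(seed_compiler, source)  # first generation, built by the seed
    gen2 = compile_with(gen1, source)           # the compiler now builds itself
    gen3 = compile_with(gen2, source)           # ...and builds itself once more
    # Success means reaching a fixed point: gen2 and gen3 are bit-identical.
    assert gen2 == gen3, "not self-hosting yet"
    return gen3
```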