Personal opinion: OpenCog is attempting to get as general as it can within the logic-and-discrete-maths framework of Narrow AI. They are going to hit a wall as they try to connect their current video-game-like environment to the real world, and find that they have failed to integrate probabilistic approaches reasonably well. Also, without probabilistic approaches, you can’t get around Rice’s Theorem to build a self-improving agent.
Wellll… the agent could make “narrow” self-improvements. It could build a formal specification for a few of its component parts and then perform the equivalent of provable compiler optimizations. But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
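To make “narrow” concrete, here is a toy sketch of my own (not anything from OpenCog): a component with a formal specification and an optimized replacement, verified against the spec. The verification below is just exhaustive checking over a small finite domain, which is exactly the kind of case undecidability does not block; a real system would discharge the same obligation with a prover.

```python
from itertools import product

# A component with a formal specification...
def spec_max3(a, b, c):
    return sorted([a, b, c])[-1]

# ...and an "optimized", branch-only replacement for it.
def optimized_max3(a, b, c):
    if a >= b:
        return a if a >= c else c
    return b if b >= c else c

# Exhaustive verification: decidable only because the domain is finite.
# (Illustrative; a real agent would prove this for all integers.)
domain = range(-4, 5)
assert all(
    spec_max3(a, b, c) == optimized_max3(a, b, c)
    for a, b, c in product(domain, repeat=3)
)
```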
> But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
This seems like the wrong conclusion to draw. Rice’s theorem (and other undecidability results) imply that there exist optimizations that are safe but cannot be proven to be safe. It doesn’t follow that most optimizations are hard to prove. One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
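As a toy illustration of that loop, with all names invented here: a random candidate generator plus a bounded equivalence check standing in for a theorem prover with a timeout. It finds some provable rewrites and silently skips the rest, which is all the argument needs.

```python
import random

# Toy stand-ins: "programs" are Python expressions in x, the candidate
# generator applies one random algebraic identity, and the "prover" is
# a bounded exhaustive check (a real prover would return a proof or
# time out; a passed check here is merely the analogue of success).
def propose_rewrite(expr):
    rules = [("* 2", "<< 1"), ("+ 0", ""), ("* 8", "<< 3")]
    old, new = random.choice(rules)
    return expr.replace(old, new, 1).strip()

def prove_equivalent(e1, e2, effort):
    # `effort` plays the role of the proof-search timeout.
    return all(eval(e1) == eval(e2) for x in range(effort))

def find_optimizations(expr, budget=100, effort=256):
    found = set()
    for _ in range(budget):
        cand = propose_rewrite(expr)
        if cand != expr and prove_equivalent(expr, cand, effort):
            found.add(cand)  # keep it; on failure, just try another
    return found

print(find_optimizations("x * 8 + 0"))  # e.g. {'x << 3 + 0', 'x * 8'}
```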
> One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
To do that, it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still rooted in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
> Problem is, OpenCog (and SOAR, too, when I saw it) is still rooted in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.
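For flavor, here is a strength-only sketch of PLN’s independence-based deduction rule (this is my simplification: real PLN truth values pair strength with a confidence component, and the published formulas carry more detail):

```python
# PLN-style deduction: infer the strength of "A implies C" from
# "A implies B" and "B implies C" plus the base rates of B and C,
# assuming C is independent of A given B and given not-B.
def pln_deduction(sAB, sBC, sB, sC):
    if sB >= 1.0:
        return sBC
    return sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)

# "Ravens are birds" (0.95), "birds fly" (0.8),
# base rates P(bird) = 0.1, P(flies) = 0.12:
print(pln_deduction(0.95, 0.8, 0.1, 0.12))  # ~0.76
```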
Aaaaand now I have to go look at OpenCog again.

> To do that, it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still rooted in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
I don’t see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the “self-improvement-finding submodule”, rather than an explicit part of the overall architecture. I don’t claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.
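To make “implicit” concrete, the expected-utility tradeoff that a fixed timeout bakes in might look like this (every number below is made up for illustration):

```python
# Whether one proof attempt is worth its prover time: the expected
# payoff of a verified optimization versus the cost of the timeout.
# A designer who picks a fixed timeout is setting this tradeoff
# implicitly, without any explicit probabilistic machinery on top.
def worth_attempting(p_success, speedup_value, timeout_cost):
    return p_success * speedup_value - timeout_cost > 0

# A 5% chance at a big win justifies a cheap attempt...
print(worth_attempting(0.05, speedup_value=100.0, timeout_cost=1.0))   # True
# ...but not an expensive one.
print(worth_attempting(0.05, speedup_value=100.0, timeout_cost=10.0))  # False
```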