My apologies for arriving late to the thread; I got linked here from Reddit and don’t keep a backlog of LessWrong’s various AI threads.
We should really be formulating this post as a question: is there, in reality, any difference between cognition and optimization? Certainly, as anyone acquainted with the maths will point out, you can in theory construct an optimization metric characterizing any form of cognition you care about (or at least, any computational cognition an AI could carry out). That does not mean, however, that the cognition itself requires, or is equivalent to, optimizing some universe for some utility metric. After all, we also know from basic programming-language theory that any computation whatsoever can be encoded (albeit possibly at considerable effort) as a pure, referentially transparent function from inputs to outputs!
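To make that last point concrete, here is a minimal sketch (all the names are my own invention, not anything from the literature): a "stateful" counter, which looks imperative, re-expressed as a pure function that threads its state explicitly.

```haskell
-- A "stateful" computation re-encoded as a pure function by threading
-- the state explicitly: same behaviour, fully referentially transparent.
type Counter = Int

-- Each "tick" maps the old state to an output paired with the new
-- state; no mutation anywhere.
tick :: Counter -> (Int, Counter)
tick n = (n, n + 1)

-- Running several ticks is just composing pure (output, state) steps.
runTicks :: Int -> Counter -> ([Int], Counter)
runTicks 0 s = ([], s)
runTicks k s =
  let (out,  s')  = tick s
      (outs, s'') = runTicks (k - 1) s'
  in (out : outs, s'')

main :: IO ()
main = print (runTicks 3 0)  -- ([0,1,2],3)
```

The encoding changes nothing about what the computation does; it only changes how we describe it, which is the whole point.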
What we do know is that optimization is the “cheapest and easiest” way of mathematically characterizing any and all possible forms of computational cognition. But that, by itself, tells us nothing about any particular calculation or train of thought we may wish to carry out, or any software we might wish to build.
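Here is the standard trivial construction behind that “cheapest and easiest” claim, sketched in Haskell (the names are mine): any function at all can be dressed up as an argmax over an indicator utility, which is precisely why the dressing-up carries no information about the computation.

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- The trivial construction: dress ANY function f up as an "optimizer"
-- by scoring outputs with an indicator utility and taking the argmax.
utilityFor :: Eq b => (a -> b) -> a -> b -> Double
utilityFor f x y = if y == f x then 1 else 0

-- Recover f by maximizing its own indicator utility over a finite
-- candidate space (finite purely so the example runs).
asOptimizer :: Eq b => (a -> b) -> [b] -> a -> b
asOptimizer f candidates x = maximumBy (comparing (utilityFor f x)) candidates

main :: IO ()
main = print (asOptimizer (* 2) [0 .. 10] (3 :: Int))  -- 6, i.e. (*2) 3
```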
For instance, we can say that AIXI or Goedel Machines, if they were physically realizable (they’re not), would possess “general intelligence” because they are parameterized over an arbitrary optimization metric (or, in AIXI’s case, a reward channel representing samples from an arbitrary optimization metric). This means they must be capable of carrying out any computable cognition, since we could input any possible cognition as a suitably encoded utility function. Of course, this very parameterization, this very generality, is what makes it so hard to encode the specific things we might want an agent to do: neither AIXI nor Goedel Machines even possess an accessible internal ontology we could use to describe things we care about!
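A schematic of what that parameterization amounts to, as a type signature — emphatically not a faithful rendering of AIXI or a Gödel machine, and every name below is mine:

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- A schematic of "parameterized over an arbitrary optimization metric".
-- The generality lives in the type variables: any history type, any
-- action type, any utility over histories.
type Utility history = history -> Double

-- The agent's entire specification is "pick the action whose predicted
-- continuation scores highest". Note that nothing here names any object
-- we actually care about: the internal ontology is opaque.
genericAgent
  :: Utility history                 -- the arbitrary metric we must supply
  -> (history -> action -> history)  -- an assumed predictive model
  -> [action]                        -- available actions
  -> history
  -> action
genericAgent u predict actions h =
  maximumBy (comparing (\a -> u (predict h a))) actions
```

The type is maximally general precisely because it commits to nothing; the entire burden of saying what we want falls on the `Utility` argument we have no good way to write down.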
Which brings us to the question: can we mathematically characterize an “agent” that can carry out any computable cognition, but does not actively optimize at all, instead simply computing “output thoughts” from “input thoughts” and writing them to some output tape?
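For whatever it’s worth, here is one way that question could be set up formally, as a hedged sketch with invented names: the “agent” is just a pure transducer over thought-tapes, and no utility function appears anywhere in its type.

```haskell
-- One candidate shape for such an "agent": a pure transducer from input
-- thoughts to output thoughts, with internal state but no utility
-- function anywhere in its type. All names here are my own invention.
data Thought = Thought String deriving (Show)

-- The agent is just a step function: (state, input) -> (state, output).
-- Nothing in this signature mentions preferences, rewards, or a world
-- to steer; it only ever writes thoughts to its output tape.
newtype Cognition s = Cognition { step :: s -> Thought -> (s, Thought) }

-- Drive the agent along an input tape, producing the output tape.
runTape :: Cognition s -> s -> [Thought] -> [Thought]
runTape agent = go
  where
    go _ []       = []
    go s (t : ts) = let (s', out) = step agent s t
                    in out : go s' ts
```

Whether anything deserving the name “general intelligence” can inhabit this shape, rather than the optimizer-shaped one above, is exactly the open question.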