To make a robot that asks “what is the meaning of life” I’d train a hierarchically-reasoning AI where the output of the innermost loop of symbolic reasoning gets fed back into the sensory stream as another input. Then I’d train it with RL.
The resulting robot would have impulses to seek out certain visual stimuli and might ask “Why is this landscape view [good] and why is this other one [bad]?”, because the RL process has given it both a sense of curiosity and a preference it doesn’t have full access to.
It would also have impulses to seek out certain inner-brain-loop states, which sit at the highest layer of abstraction, and would ask “Why is this particular way of living my life [good] and that other one [bad]?”, for the same reasons as above.
I think that second kind of question is basically equivalent to humans asking what the meaning of life is.
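To make the feedback-loop idea concrete, here is a minimal toy sketch, entirely my own illustration (the class, its method names, and the reward rule are all hypothetical, not a real implementation): the agent’s inner reasoning output is re-ingested as part of the next observation, and the reward depends on a hidden preference the agent cannot inspect, mirroring the “preference it doesn’t have full access to” above.

```python
import random

random.seed(0)


class RecurrentReasoner:
    """Toy agent: the innermost reasoning step's output is fed back
    into the next observation, alongside the external stimulus."""

    def __init__(self):
        self.last_thought = 0.0            # output of the inner loop
        self.preference = random.random()  # hidden, RL-shaped preference

    def observe(self, external):
        # Sensory stream = external stimulus + re-ingested inner state
        return (external, self.last_thought)

    def reason(self, obs):
        external, prior_thought = obs
        # Stand-in for symbolic reasoning: blend stimulus with prior thought
        self.last_thought = 0.5 * external + 0.5 * prior_thought
        return self.last_thought

    def reward(self, thought):
        # RL signal the agent can't introspect: closeness to the hidden
        # preference; the agent feels the gradient, not the reason for it
        return -abs(thought - self.preference)


agent = RecurrentReasoner()
for step in range(10):
    stimulus = random.random()
    obs = agent.observe(stimulus)      # inner state re-enters as input
    thought = agent.reason(obs)
    r = agent.reward(thought)          # always <= 0; 0 only at the preference
```

The point of the sketch is the wiring, not the arithmetic: because `last_thought` flows back through `observe`, the agent’s highest-level internal states become stimuli it can notice, prefer, and eventually question.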