That doesn’t feel sufficient to explain to me “why do humans ask questions like ‘what is the meaning of life’?”
What makes a world state meaningful? If you were building the minimum viable robot that might experience meaning, or might ask “what is the meaning of life?”, what is the smallest bit you could take away such that it no longer asks that question?
To make a robot that asks “what is the meaning of life” I’d train a hierarchically-reasoning AI where the innermost loop of symbolic reasoning gets fed back as a sensory input into the sensory stream. Then I’d RL it.
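Concretely, here’s a minimal sketch of the kind of thing I mean (PyTorch; the module names, dimensions, and toy reward are all illustrative stand-ins I made up, not a spec):

```python
import torch
import torch.nn as nn

class InnerLoopAgent(nn.Module):
    """Agent whose innermost reasoning state is fed back into its own
    sensory stream, so later steps can 'perceive' earlier thoughts."""

    def __init__(self, obs_dim=16, thought_dim=8, n_actions=4):
        super().__init__()
        # Sensory stream = external observation + previous inner-loop state.
        self.encoder = nn.Linear(obs_dim + thought_dim, 32)
        # Innermost reasoning loop: updates the "thought" state each step.
        self.inner_loop = nn.GRUCell(32, thought_dim)
        self.policy_head = nn.Linear(thought_dim, n_actions)

    def forward(self, obs, prev_thought):
        # The previous thought re-enters as if it were another sense.
        x = torch.relu(self.encoder(torch.cat([obs, prev_thought], dim=-1)))
        thought = self.inner_loop(x, prev_thought)
        return self.policy_head(thought), thought

# One toy REINFORCE episode ("then I'd RL it").
agent = InnerLoopAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
thought = torch.zeros(1, 8)
log_probs, rewards = [], []
for step in range(20):
    obs = torch.randn(1, 16)  # stand-in for real sensory input
    logits, thought = agent(obs, thought)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    log_probs.append(dist.log_prob(action))
    # Reward depends on both raw stimuli and the thought-state itself,
    # but the agent never sees this function -- it only feels its pull.
    rewards.append((obs.mean() + thought.mean()).detach())
ret = torch.stack(rewards).sum()
loss = -torch.stack(log_probs).sum() * ret  # REINFORCE policy gradient
opt.zero_grad()
loss.backward()
opt.step()
```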
The resulting robot would have impulses to seek out certain visual stimuli, and might ask “Why is this landscape view [good] and why is this other one [bad]?”, because the RL process has given it both a sense of curiosity and a preference which it doesn’t have full access to.
It would also have impulses to seek out certain inner-brain-loop states, which sit at the highest layer of abstraction, and would ask “Why is this particular way of living my life [good] and the other [bad]?”, for the same reasons as above.
I think that second kind of question is basically equivalent to humans asking what the meaning of life is.
> That doesn’t feel sufficient to explain to me “why do humans ask questions like ‘what is the meaning of life’?”
> What makes a world state meaningful?
Have you read Steve Byrnes’s [Valence series] 2. Valence & Normativity? I think it’s a remarkably clearly-written (and correct!) illustration of how people relate to what’s meaningful/desirable/positive-valence.[1]
[1] From a perspective similar to (but much more detailed and gears-level than) what J Bostock was talking about.
Not in much detail, thanks!