The older conversations about Tool AI seemed to focus on the difference between an Oracle that answers questions and one that does things. I feel like this distinction is bigger, and different in kind, than it was made out to be, because doing things is really hard. It feels like the paradox of sensing being complicated should cut both ways.
The Wikipedia page for Moravec’s Paradox describes it in “sensorimotor” terms, so both sensing and motor skills (inputs and outputs) are covered. My intuition fairly screams that this should generalize to any other environment-affecting action. So:
The more inputs an AI starts with, the easier it is to recognize other inputs/outputs.
The more outputs an AI starts with, the easier it is to add other inputs/outputs.
This still doesn’t identify what causes the machine to try to affect the world at all.
1. I am deeply confused by this.