Here are a couple of other proposals (which I haven’t thought about very long) for consideration:
1. Have the AI create an internal object structure of all the concepts in the world, trying as best it can to carve reality at its joints. Let the AI's programmers inspect this object structure, make modifications to it, then formulate a command for the AI in terms of the concepts it has discovered for itself.
2. Instead of developing a foolproof way for the AI to understand meaning, develop an OK way for the AI to understand meaning and pair it with a really good system for maintaining a distribution over different meanings and asking clarifying questions.
That first one would be worth doing even if we didn’t dare hand the AI the keys to go and make changes. To study a non-human-created ontology would be fascinating and maybe really useful.
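The second proposal has a natural formal core: treat the candidate meanings of a command as hypotheses, keep a Bayesian posterior over them, and choose the clarifying question with the highest expected information gain. The sketch below is purely illustrative (the example meanings, questions, and deterministic answer model are my assumptions, not anything specified above):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a collection of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def update(prior, likelihoods):
    """Bayes update; prior and likelihoods are dicts keyed by meaning."""
    post = {m: prior[m] * likelihoods[m] for m in prior}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

def expected_info_gain(prior, question):
    """Expected reduction in entropy from asking `question`, where
    `question` maps each meaning to the answer it predicts (answers
    are deterministic given the meaning, for simplicity)."""
    h_before = entropy(prior.values())
    # Probability of each possible answer under the prior.
    p_answer = {}
    for m, a in question.items():
        p_answer[a] = p_answer.get(a, 0.0) + prior[m]
    # Expected posterior entropy, averaged over answers.
    h_after = 0.0
    for a, p_a in p_answer.items():
        cond = [prior[m] / p_a for m in prior if question[m] == a]
        h_after += p_a * entropy(cond)
    return h_before - h_after

# Toy example: three candidate meanings of "clean the room".
prior = {"tidy": 0.5, "vacuum": 0.3, "sterilize": 0.2}

# Candidate clarifying questions, each mapping meaning -> predicted answer.
q1 = {"tidy": "no", "vacuum": "no", "sterilize": "yes"}   # "Should I use chemicals?"
q2 = {"tidy": "no", "vacuum": "yes", "sterilize": "yes"}  # "Should I use a machine?"

# Ask the question that most reduces uncertainty about the meaning.
best = max([q1, q2], key=lambda q: expected_info_gain(prior, q))
```

A real system would need a likelihood model for noisy answers and a far richer hypothesis space, but the design choice illustrated, asking the question that maximizes expected information gain rather than the first ambiguity noticed, is the part doing the work in the proposal.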