Later I will say more upon this subject, but I can go ahead and tell you one of the guiding principles: If you meet someone who says that their AI will do XYZ just like humans, do not give them any venture capital. Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies.
It seems like the more productive thing for a venture capitalist to say isn’t “planes are not kept aloft by analogies” but just “Be Specific.” If you’re using humans as shorthand for a black box, that’s unproductive. If you’re saying “A victory condition is that my chat program passes for human about as often as the average human in the following versions of the Turing Test” you have set up a specific goal to shoot for.
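To make a victory condition like that concrete, here's a minimal sketch of how you might score it. The trial counts and the choice of a two-proportion z-test are my own illustrative assumptions, not anything from the original; the point is just that "passes about as often as the average human" can be turned into a number you can actually test.

```python
import math

def two_proportion_z(passes_a, trials_a, passes_b, trials_b):
    """Two-proportion z-test: is the bot's pass rate statistically
    distinguishable from the human baseline's pass rate?"""
    p_a = passes_a / trials_a
    p_b = passes_b / trials_b
    # Pooled proportion under the null hypothesis that both rates are equal.
    pooled = (passes_a + passes_b) / (trials_a + trials_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    return (p_a - p_b) / se

# Hypothetical results: human confederates fooled the judges in 72 of 100
# sessions, the chatbot in 65 of 100.
z = two_proportion_z(65, 100, 72, 100)
print(f"z = {z:.2f}")  # |z| < 1.96 -> indistinguishable at the 5% level
```

Whether this particular statistic is the right one matters less than the fact that it forces you to specify the judges, the session format, and the baseline in advance.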
There’s still plenty of room to argue about how you’re planning to train your chatbot, or whether that project is particularly useful for developing AI. But working on these small “like-a-human” problems seems like an easier way for most people to notice unnatural or magical category problems. Once you start noticing the failure modes, you might have a better idea of what you’re actually trying to emulate.
Projects like Kismet seem to have taught us that we’re not that interested in making computers that learn like infants. Among other things, it’s hard to check if they’re working as intended. Having AI enthusiasts go down this road—trying to emulate specific human behaviors or categorizations—seems helpful, since they’re diverted from more dangerous AGI research, and the ways they fail will push everyone to think harder about how to formalize these categories.