I am sorry, Tim, that I am not properly respectful of your hero Eliezer Yudkowsky or the AGI field's claims. The fact that humans are intelligent in no way proves AGI. If you can't comprehend that AGI is an engineering problem, then you really are out of touch with what is going on. The only way to prove that AGI is possible is to build it; until then it's just a belief, nothing more. There is no necessity attached to it.
We will build very intelligent machines—I take that for granted. If you think otherwise, fine—but don’t expect a debate on the issue from me. Nor does this seem like the right place for such a discussion. Perhaps try comp.ai.philosophy—they seem to like such banter over there.