I see what you mean, but I’m having trouble understanding what “a simplified version of Strong AI” would look like.
For example, can we consider a natural language processing system connected to a modern search engine to be “a simplified version of Strong AI”? Such a system is obviously not generally intelligent, but it does perform several important functions—such as natural language processing—that would almost certainly be a requirement for any AGI. However, the implementation of such a system is most likely not generalizable to an AGI (if it were, we’d have AGI by now). So, can we consider it to be a “toy project”, or not?