Tool-AGI seems to be an incoherent concept. If a Tool simply solves a given set of problems in the prespecified allowed ways (only solves GoogleMap problems, takes its existing data set as fixed, and has some pre-determined, safe set of simple actions it can take), then it’s a narrow AI.
Another way of putting this is that a “tool” has an underlying instruction set that conceptually looks like: “(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc.”
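The two-step instruction set above can be sketched in code. This is a purely illustrative toy, not anyone's proposed implementation: the route data, the scoring function standing in for parameter P, and the function name are all made up for the example.

```python
# Toy sketch of the two-step "tool" loop: (1) find the action A that
# maximizes parameter P over a fixed data set D, (2) summarize the
# result, including runner-up actions with high values of P.

def tool_recommend(actions, data, p):
    # Step 1: score every candidate action A by P(A, D), best first.
    scored = sorted(actions, key=lambda a: p(a, data), reverse=True)
    best = scored[0]
    # Step 2: a user-friendly summary of the calculation.
    return {
        "best_action": best,
        "best_score": p(best, data),
        "runners_up": scored[1:3],
    }

# Illustrative usage: D is a fixed table of route times (minutes),
# and P is simply negative travel time (so maximizing P = fastest route).
data = {"highway": 30, "back_roads": 45, "transit": 40}
p = lambda route, d: -d[route]
print(tool_recommend(list(data), data, p))
```

Note that the summary step here is trivial; the thread's point is that a genuinely useful summary (which intermediate outcomes matter to people) is exactly the hard part.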
If an AGI is able to understand which of the intermediate outcomes an action may cause are important for people, and to summarize this information in a user-friendly manner, then building such AGI is FAI-complete.
I don’t think the concept is incoherent. What do you think of my more specific suggestion? Holden’s idea seems sufficiently different from other ideas that I don’t think arguing about whether it is AGI or narrow AI is very useful.