[Question] Will the first AGI agent have been designed as an agent (in addition to an AGI)?

I wonder about a scenario where the first AI with human-level or superior capabilities is not goal-oriented at all, e.g., a language model like GPT. One instance of it is then used, possibly by a random user, to create a conversational agent instructed to behave as a goal-oriented AI. That bot would then act as an AGI agent, with everything that implies from a safety standpoint, e.g., using its human user to affect the outside world.

Is this a plausible scenario for the development of AGI, and of the first goal-oriented AGI in particular? Does it have any implications for AI safety, compared to the case of an AGI designed to be goal-oriented from the start?