A lot of the distinction between a service and an agent seems to rest on the difference between thinking and doing.
That doesn’t seem right to me. There are several, potentially subtle differences between services and agents – the boundary (or maybe even ‘boundaries’) are probably nebulous at high resolution.
A good prototypical service is Google Translate. You submit text to it to translate and it outputs a translation as text. It’s both thinking and doing but the ‘doing’ is limited – it just outputs translated text.
A good prototypical agent is AlphaGo. It pursues a goal, to win a game of Go, but does so in a (more) open-ended fashion than a service. It will continue to play as long as it can.
I am aiming directly at questions of how an AI that starts with only a robotic arm might get to controlling drones or trading stocks, from the perspective of the AI.
I think one thing to point out up-front is that a lot of current AI systems are generated or built in a stage distinct from the stage in which they ‘operate’. A lot of machine learning algorithms involve a distinct period of learning, first, which produces a model. That model can then be used – as a service. The model/service would do something like ‘tell me if an image is of a hot dog’. Or, in the case of AlphaGo, something like ‘given a game state X, what next move or action should be taken?’.
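A minimal sketch of that two-stage pattern, with a toy nearest-centroid stand-in for the 'hot dog' classifier (the features and data here are illustrative assumptions, not a real detector): training runs once and produces a frozen model, which is then queried purely as a service.

```python
def train(examples):
    """Training stage: compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(model, features):
    """Service stage: the frozen model answers 'is this a hot dog?' queries."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = train([([1.0, 0.2], "hot dog"), ([0.1, 0.9], "not hot dog")])
print(predict(model, [0.9, 0.3]))  # → hot dog
```

Note that `predict` never modifies `model` – once training ends, the service's behavior is fixed.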
What makes AlphaGo an agent is that its model is operated in a mode whereby it’s continually fed a sequence of game states, and, crucially, both its output controls the behavior of a player in the game, and the next game state it’s given depends on its previous output. It becomes embedded or embodied via the feedback between its output, player behavior, and its subsequent input, a game state that includes the consequences of its previous output.
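The feedback loop described above can be sketched with a toy environment standing in for Go (a simple counter, purely an illustrative assumption): the model's output becomes an action, and the next state the model sees includes the consequences of that action.

```python
def policy(state):
    """Stand-in for the trained model: given a state, pick an action."""
    return +1 if state < 5 else 0  # push the counter toward 5, then stop

def step(state, action):
    """Environment: the next state depends on the agent's previous output."""
    return state + action

state = 0
trajectory = [state]
for _ in range(10):
    action = policy(state)       # model output...
    state = step(state, action)  # ...feeds back into the model's next input
    trajectory.append(state)
print(trajectory)  # → [0, 1, 2, 3, 4, 5, 5, 5, 5, 5, 5]
```

The agency lives in the loop, not in `policy` itself – the same function called once on a single state would just be a service.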
But, we’re still missing yet another crucial ingredient to make an agent truly (or at least more) dangerous – ‘online learning’.
Instead of training a model/service all at once up-front, we could train it while it acts as an agent or service, i.e. ‘online’.
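A minimal sketch of online learning under toy assumptions (the target value and learning rate are invented for illustration): the model's parameters are updated after every interaction, while it is already acting, rather than in a single up-front training stage.

```python
def act(estimate):
    """The agent acts on its current belief."""
    return estimate

def update(estimate, feedback, lr=0.5):
    """Online update: nudge the estimate toward the observed feedback."""
    return estimate + lr * (feedback - estimate)

target = 10.0   # the value the environment will reveal as feedback
estimate = 0.0  # the model's initial parameter
for _ in range(20):
    action = act(estimate)
    feedback = target            # environment responds to the action
    estimate = update(estimate, feedback)  # learning continues during operation
print(round(estimate, 3))  # → 10.0
```

Contrast with the frozen-model case: here there is no point at which the system's behavior stops changing, which is exactly what makes online learning the more dangerous ingredient.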
I would be very surprised if an AI installed to control a robotic arm gained control of drones or became able to trade stocks, but only because I would expect such an AI not to use online learning, and to be very limited both in the inputs it’s provided with (e.g. the position of the arm and maybe a camera covering its work area) and in the outputs it has direct access to (e.g. a sequence of arm motions to be performed).
Probably the most dangerous kind of tool/service AI imagined is an oracle AI, i.e. an AI to which people would pose general open-ended questions, e.g. ‘what should I do?’. For oracle AIs, I think some other (possibly) key dangerous ingredients might be present:
Knowledge of other oracle AIs (as a plausible stepping stone to the next ingredient)
Knowledge of itself as an oracle AI (and thus an important asset)
Knowledge of its own effects on the world, through those that consult it, or those that are otherwise aware of its existence or its output