It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse ‘friendliness’ into the AI.
If the AI receives commands frequently, it would be weak—and probably not very competitive. It would be like a child running to its mummy all the time. If it has to check in before acting, making decisions fast is not on the cards.
If the AI receives commands infrequently, that’s more-or-less what is under discussion.
However, AIs can be expected to naturally defend their goals. It may be best not to provide a convenient interface for changing them—since it could also be used to hijack the AI. That’s especially true if the AI is deployed into “uncertain” territory—e.g. as a consumer robot’s brain. We wouldn’t want consumers to be able to reprogram the AI to kill people—that would not reflect well on the robot company’s image.