There is no guarantee that the derived goals will be logically consistent with the input goals, except in highly simplified situations.
The text says an AI “should” maintain a goal structure that produces logically inconsistent subgoals, and I don’t think I understand what they mean. Are they saying that (practically feasible) goal-derivation algorithms necessarily produce logical inconsistencies? Or that this is actually a desirable property? Or something else?
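For concreteness, here is one reading I can construct (my own hypothetical example, not something from the text): practically feasible planners often decompose a conjunctive goal into subgoals and pursue them independently, and those subgoals can interact so that achieving one undoes another — the classic Sussman anomaly from blocks-world planning. A minimal Python sketch, assuming that this kind of subgoal interaction is what “inconsistent with the input goals” refers to:

```python
# Hypothetical blocks-world sketch (my construction, not the text's example).
# State: pos[x] is whatever block x sits on ("table" or another block).

def achieve(pos, block, dest):
    """Greedily achieve on(block, dest), ignoring every other subgoal."""
    # Naively clear block and dest by dumping whatever sits on them onto the table.
    for b, support in list(pos.items()):
        if support in (block, dest):
            pos[b] = "table"
    pos[block] = dest

# Initial state: C on A; A and B on the table.
pos = {"A": "table", "B": "table", "C": "A"}

# Input goal on(A, B) AND on(B, C), naively decomposed into two subgoals.
subgoals = [("A", "B"), ("B", "C")]

for block, dest in subgoals:
    achieve(pos, block, dest)   # each call succeeds in isolation...

print(pos)  # {'A': 'table', 'B': 'C', 'C': 'table'}
# ...but achieving on(B, C) undid on(A, B): the executed subgoals end up
# jointly inconsistent with the conjunctive input goal.
assert not all(pos[b] == d for b, d in subgoals)
```

Under that reading, the quoted claim would just mean that ruling out such interactions requires global consistency checking, which is only tractable in highly simplified situations. But I may be misreading it, hence my question.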