I don’t think this would be a worthwhile endeavor, because we already know that deep reinforcement learning can deal with these sorts of interface constraints, as shown by DeepMind’s older work. I would expect the agent’s behavior to converge toward that of the current AI, just requiring more compute.
I think the question is about making the compute requirements comparable. One of the criticisms of early AI work is that doing simple math on abstract things can seem very powerful if the abstractions are provided for free, whereas real humans have to extract the essential abstractions from a messy world. Consider a soldier robot that has to assign a friend-or-foe classification to a humanoid as part of a decision about whether to shoot at it. That is a real subtask that handing the agent a magic “label” would unfairly circumvent. In nature, even imperfect camouflage can be valuable: even if the animal is correctly identified as prey, delaying the detection event or making the hunter hesitate has value.
Also, a game like QWOP is surprisingly difficult for humans, and giving a computer “just control over the legs” would make the whole game trivial.
A lot of StarCraft technique also mirrors the game’s restrictions. Part of the point of control groups is to bypass screen-zoom limitations. In Supreme Commander, for example, some of those particular limitations don’t exist, because you can zoom out to fit the whole map on the screen at once, and because dividing attention across different parts of the battlefield has been made more convenient, or at least different: there are new problems, such as “dots fighting dots” making micro considerations hard to see.
Maybe you’re right… My sense is that it would converge toward the behavior of the current AI, but more slowly, especially for movements that require a lot of accuracy. There might be a simpler way to add that constraint without wasting compute, though.
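One cheap way to impose that kind of constraint without retraining from raw pixels might be to wrap the environment’s action channel so that actions arrive delayed and imprecise, the way human motor output does, while the learner itself stays untouched. Here is a minimal sketch, assuming a Gym-style `step` interface; `NoisyActionEnv` and `DummyEnv` are hypothetical names for illustration, not a real library API:

```python
import random

class NoisyActionEnv:
    """Wraps an environment so actions arrive with delay and jitter,
    roughly mimicking human reaction time and motor imprecision."""

    def __init__(self, env, delay_steps=2, noise_std=0.05, seed=0):
        self.env = env
        self.delay_steps = delay_steps      # steps before an action takes effect
        self.noise_std = noise_std          # std-dev of Gaussian action jitter
        self.rng = random.Random(seed)
        self.queue = []                     # pending actions, oldest first

    def step(self, action):
        # Jitter each action component to mimic imprecise motor control.
        noisy = [a + self.rng.gauss(0.0, self.noise_std) for a in action]
        self.queue.append(noisy)
        if len(self.queue) <= self.delay_steps:
            # Until the delay pipeline fills, execute a no-op action.
            executed = [0.0] * len(action)
        else:
            executed = self.queue.pop(0)
        return self.env.step(executed)

class DummyEnv:
    """Trivial stand-in environment: the observation echoes the action."""
    def step(self, action):
        return action, 0.0, False  # obs, reward, done
```

The appeal of this approach, if it works, is that the constraint costs almost nothing at runtime, whereas forcing the agent to rediscover hand/eye limitations from a pixel interface pays the compute price on every training step.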