Even when the agent has more compute than we do? I continue to take the intentional stance towards agents I understand but can’t compute, like MCTS-based chess players.
What do you mean by taking the intentional stance in this case?
I would model the program as a thing that is optimizing for a goal. While I might know something about the program’s weaknesses, I primarily model it as a thing that selects good chess moves. Especially if it is a better chess player than I am.
See: Goal inference as inverse planning.