Great post. That Anakin meme is gold.
“Whenever you notice yourself saying ‘outside view’ or ‘inside view,’ imagine a tiny Daniel Kokotajlo hopping up and down on your shoulder chirping ‘Taboo outside view.’”
Somehow I know this will now happen automatically whenever I hear or read “outside view.” 😂
I agree pretty strongly with all of this, fwiw. I think Dennett/the intentional stance really gets at the core of what it means for a system to “be an agent”: essentially, a system is an agent to the extent that it makes sense to model it as one, i.e. as having beliefs and preferences, and acting on those beliefs to achieve those preferences, etc. The very reason we usually consider ourselves and other humans to be “agents” is exactly that this is the model of the sensory data that the mind finds most reasonable to use, most of the time. In doing so, we really are ascribing cognition to these systems, and in practice, of course, we’ll need to understand how such behavior is actually implemented in our AIs. (And thinking about how “goal-directed behavior” is implemented in humans/biological neural nets seems like a good place to mine for useful insights and analogies for this purpose.)