I thought the whole thread was about the difficulty of understanding agency, i.e., breaking down the concept of agency into more useful concepts, or just making it better-defined.
I don’t think it’s hard to make LLMs “exhibit” agency, or at least not in ways very different from the ways it’s hard to make humans do so. On the other hand, discussion of AI risk that anthropomorphizes the AI usually grounds out in some confusion along the lines of, e.g., “what part of the AI system is the agent, if the whole thing is just a collection of floating-point ops, and how do we square that frame with the frames we usually use to describe agency?” (Attempts to metaphorically map this to the human setting typically just produce confusion about the human setting, too.)
Yes? What was your reading?