One common confusion I see is analogizing whole LLMs to individual humans and, on that basis, concluding that LLMs can’t think or aren’t conscious. It is more appropriate to analogize an LLM to the human genome, and individual instantiations of the LLM to individual humans.
The human genome is more or less unchanging, but one can pull entities from it that can learn from their environments. Likewise, an LLM is more or less unchanging, but one can pull entities from it that can learn from the context.
It would be pretty silly to say that humans can’t think or aren’t conscious because the human genome doesn’t change.