Current LLMs are probably not conscious, but the problem is that we wouldn’t be able to tell if they were. All our heuristics for consciousness apply to creatures produced by evolution through natural selection, in which being smart correlates with the likelihood of being conscious. LLMs are not like that, and we do not yet know how to evaluate their consciousness. This is a worrisome state of affairs, and another reason to pause the development of more powerful AIs until we properly understand the current ones.
Consciousness is an emergent property.
This is a meaningless statement. Everything beyond individual quarks is an emergent property. There has to be a specific, identifiable principle according to which executing some algorithms produces consciousness while executing others does not. We need to discover it and make sure that the AI we produce does not accidentally turn out to be conscious.
LLMs surely satisfy these requirements for AGI
Probably not LLMs themselves, but language model agents that can be built on top of them. I think you can get an AGI from modern LLMs by applying the right scaffolding, roughly in the sense sketched below.
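To make "scaffolding" concrete, here is a minimal sketch of what a language model agent loop might look like: an outer program that repeatedly asks the model for the next action, executes it with a tool, and feeds the result back as memory. The `llm` callable and the tool names are assumptions for illustration, not any particular product's API.

```python
def agent_loop(llm, tools, goal, max_steps=20):
    """Drive an LLM toward a goal: think, act with a tool, observe, repeat.

    llm   -- a callable that takes a prompt string and returns a string (assumed)
    tools -- a dict mapping tool names to functions of one string argument (assumed)
    """
    memory = [f"Goal: {goal}"]
    for _ in range(max_steps):
        prompt = "\n".join(memory) + (
            "\nReply with the next action as 'tool_name: argument', or 'done: answer'."
        )
        decision = llm(prompt)                      # the model proposes the next step
        name, _, arg = decision.partition(":")
        name, arg = name.strip(), arg.strip()
        if name == "done":                          # the agent decides it has finished
            return arg
        result = tools[name](arg) if name in tools else f"unknown tool '{name}'"
        memory.append(f"Action: {decision}")        # persistent memory is part of the scaffold
        memory.append(f"Observation: {result}")
    return None                                     # gave up after max_steps
```

The point is that the planning loop, tool use, and persistent memory live in ordinary code around the model, not in the model itself.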
The definition of AGI is inflated too much. It has to be better at every field than any human? Come on. You don’t have to be smarter than von Neumann to be a conscious human.
You shouldn’t confuse being a general reasoner with being conscious. One doesn’t actually imply the other.