Personally, I’m against unqualified usage of “uncertainty” in relation to consciousness: it conflates factual uncertainty with ethical uncertainty. And it’s mostly the ethical uncertainty that needs to be worked on. It’s not that there couldn’t be relevant questions about the specifics of information processing in LLMs, but without consensus about what we value in humans, empirical research done in the name of helping with ethical questions is mostly a distraction motivated by moral-realist thinking. We already have enough knowledge about the brain to decide simple ethical questions now: you don’t need more neuroscience to decide why you wouldn’t value a global workspace implemented in a couple of lines of code that takes 2 MB of RAM.
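To make the point concrete, here is a deliberately minimal sketch of a global-workspace-style architecture: specialist modules compete for access to a shared buffer, and the winning content is broadcast back to every module. All class and method names here are illustrative inventions, not drawn from any published implementation; the point is only that the bare functional pattern fits in a few lines.

```python
class Module:
    """A specialist process that bids for workspace access."""
    def __init__(self, name, weight):
        self.name, self.weight = name, weight
        self.inbox = []  # messages received via broadcast

    def propose(self, stimulus):
        # Bid = (salience, message); salience decides the competition.
        return (self.weight * len(stimulus), f"{self.name}:{stimulus}")

    def receive(self, message):
        self.inbox.append(message)

class Workspace:
    """A single shared slot whose contents are broadcast to all modules."""
    def __init__(self, modules):
        self.modules = modules
        self.contents = None

    def cycle(self, stimulus):
        # Competition: the most salient bid wins access to the workspace.
        bids = [m.propose(stimulus) for m in self.modules]
        _, message = max(bids, key=lambda b: b[0])
        self.contents = message
        # Broadcast: every module receives the winning content.
        for m in self.modules:
            m.receive(message)
        return message

ws = Workspace([Module("vision", 2.0), Module("audio", 1.0)])
print(ws.cycle("red light"))  # vision's higher-salience bid wins and is broadcast
```

Nothing about this competition-plus-broadcast loop, taken by itself, seems like something we would grant moral standing to, which is the ethical judgment the paragraph argues no further neuroscience can settle for us.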