I don’t think you can make LLMs feel alien, because they are not in fact highly alien. Neural systems are pretty familiar to other neural systems (where by “neural system” I mean a network of interacting components that learns via small updates that are local in parameter space). You’re more likely to make people go “wow, brains are cool” or “wow, AI is cool” than to convince them it’s a deeply alien mind, because the similarity is strong enough that people run studies to learn about neuroscience from deep learning. Also, I’ve seen public evidence that people believe ChatGPT’s pitch that it’s not conscious.
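(To pin down the parenthetical definition: “small updates, local in parameter space” is the gradient-descent picture, where each learning step nudges every parameter slightly using only locally available gradient information. A minimal toy sketch in Python; nothing here is specific to any real model:

```python
# Toy illustration of learning via small, local parameter updates:
# each step moves the parameter a little, using only the local gradient.
def sgd_step(w: float, grad: float, lr: float = 0.1) -> float:
    """One stochastic-gradient-descent update: a small, local move of w."""
    return w - lr * grad

w = 0.0
for _ in range(100):
    grad = 2 * (w - 3.0)  # gradient of the toy loss (w - 3)^2
    w = sgd_step(w, grad)

print(round(w, 4))  # many small steps converge to ~3.0, the loss minimum
```

The point of the toy example is just that nothing global or discontinuous happens in any single step; learning is the accumulation of many tiny local moves.)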
I would distinguish between “feeling alien” (as in, most of the time the system doesn’t feel too weird or non-human to interact with, at least if you don’t look too closely) and “being alien” (as in, having the potential to sometimes behave in a way that a human never would).
My argument is that current LLMs might not feel alien (at least to some people), but they definitely are alien. For example, any human who is smart enough to write a good essay will also be able to count the number of words in a sentence, yet LLMs can do one but not the other. Similarly, humans have moods, emotions, and other things going on in their heads, such that when they say “I am sorry” or “I promise to do X”, it is a somewhat costly signal about their future behaviour; this doesn’t have to be true at all for an AI.
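(One commonly cited mechanism behind the counting failures, though not necessarily the whole story, is that LLMs see tokens rather than words or characters, and token boundaries need not line up with the units being counted. A minimal sketch, assuming the `tiktoken` package, which implements the tokenizers used by OpenAI models:

```python
# Sketch: compare an LLM's input units (tokens) with the words a human sees.
# Assumes `tiktoken` is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer

sentence = "I promise to count the words in this sentence."
tokens = enc.encode(sentence)

print("words: ", len(sentence.split()))          # 9 words
print("tokens:", len(tokens))                     # token count may differ
print([enc.decode([t]) for t in tokens])          # boundaries need not align with words
```

Running this shows that the model’s native units are not the units the counting task is posed in, which a human reader never has to contend with.)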
(Also, you are right that people believe that ChatGPT isn’t conscious. But this seems quite unrelated to the overall point: I expect some people would also believe ChatGPT if it started saying that it is conscious, and if ChatGPT were conscious and claimed that it isn’t, many people would still believe that it isn’t.)