Nice talk, and looking forward to the rest of the series!
> They’re exposed to our discourse about consciousness.
For what it’s worth, I would strongly bet that if you purge all discussion of consciousness from an LLM training run, the LLMs won’t spontaneously start talking about consciousness, or anything of the sort. (I am saying this specifically about LLMs; I would expect discussion-of-consciousness to emerge 100% from scratch in different AI algorithms.)
> AIs may be specifically crafted/trained to seem human-like and/or conscious
Related to this, I really love this post from 2 years ago: “Microsoft and OpenAI, stop telling chatbots to roleplay as AI”. The two-sentence summary is: You can train an LLM to roleplay as a sassy talking pink unicorn, or whatever else. But what companies overwhelmingly choose to do is train LLMs to roleplay as LLMs.
> Gradual replacement maybe proves too much re: recordings and look-up table … ambiguity/triviality about what computation a system implements, weird for consciousness to depend on counterfactual behavior …
I think Scott Aaronson has a good response to that, and I elaborate on the “recordings and look-up table” and “counterfactual” aspects in this comment.