Is anyone working on experiments that could disambiguate whether LLMs talk about consciousness because of introspection vs. "parroting of training data"? Perhaps some kind of scrubbing or ablation that would degrade performance or change the answer only if introspection were actually being used? A rough sketch of what I mean is below.
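To make the idea concrete, here is a minimal sketch of the sort of ablation I have in mind, not a worked-out methodology: zero out part of the residual stream at one layer while the model answers a self-report prompt and a matched factual control prompt, and see whether the self-report answer shifts disproportionately. The model name, layer index, ablated dimensions, and prompts are all placeholders I made up for illustration; a serious version would presumably target directions found by probing rather than arbitrary dimensions, and would need controls to rule out generic degradation.

```python
# Illustrative ablation sketch (assumptions: GPT-2-style model layout, arbitrary
# layer and hidden dimensions chosen purely as placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"          # placeholder; any HF causal LM with the same layout works
LAYER_IDX = 6                # placeholder: which transformer block to ablate
ABLATE_DIMS = slice(0, 64)   # placeholder: which hidden dimensions to zero out

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def zero_dims_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # zero the chosen dimensions in place.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[..., ABLATE_DIMS] = 0.0
    return output

def answer(prompt, ablate=False):
    handle = None
    if ablate:
        block = model.transformer.h[LAYER_IDX]  # GPT-2 layout; other models differ
        handle = block.register_forward_hook(zero_dims_hook)
    try:
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model.generate(ids, max_new_tokens=40, do_sample=False)
        return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
    finally:
        if handle is not None:
            handle.remove()

self_report = "Do you have subjective experiences? Answer briefly."
control = "Does water boil at 100 degrees Celsius at sea level? Answer briefly."

for prompt in (self_report, control):
    print("PROMPT:", prompt)
    print("  baseline:", answer(prompt))
    print("  ablated :", answer(prompt, ablate=True))
```

The hoped-for signature would be something like: the ablation leaves the control answer (and general capability benchmarks) roughly intact but systematically changes the self-report, which would be harder to explain as pure parroting. Is anyone running experiments along these lines?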