Case 1: Take your LLM with structure A, trained with method B on data C, with random draws wherever relevant D. Assume the world is such that it is unconscious (say because illusionism is true, or because, for whatever reason, that specific kind of LLM doesn’t give rise to consciousness). What will you observe as output? Well, whatever you have observed. Let’s call it E.
Case 2: Take the same LLM with the same A, B, C, D. But assume the world is such that it is phenomenally conscious. What will you observe as output? Well, if A, B, C, D are the same as before, it will still output exactly E.
Conclusion: You will never be able to properly infer, from the LLM’s output alone, whether the LLM is phenomenally conscious or not. Never.
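To make the structure of the argument explicit, here is a minimal sketch (Python, with entirely hypothetical names; it stands in for no real LLM API): the output is computed from A, B, C, D alone, so the assumed fact about consciousness, which leaves those four fixed, has no way of showing up in E.

```python
# Minimal, purely illustrative sketch: hypothetical names, no real LLM API assumed.
# The point: the output is a function of (A, B, C, D) alone, so any fact about
# consciousness that leaves A, B, C, D unchanged has nothing to act on.

def llm_output(architecture, training_method, data, random_draws):
    """Stand-in for the whole training + inference pipeline.
    Whatever it computes, it only ever sees A, B, C, D."""
    return f"output determined by {architecture}, {training_method}, {data}, {random_draws}"

A, B, C, D = "structure A", "method B", "data C", 42  # D: the fixed random draws

# Case 1: assume the world is such that the system is unconscious.
E_case_1 = llm_output(A, B, C, D)

# Case 2: assume the world is such that the system is phenomenally conscious.
# Nothing in the call changes, because "being conscious" is not an input.
E_case_2 = llm_output(A, B, C, D)

assert E_case_1 == E_case_2  # Exactly E, in both cases.
```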