I don’t think that disproves it. There’s definite value in engaging in experimentation on AI consciousness, but this isn’t it.
>by making it impossible that the model thought that experience from a model was what I wanted to hear.
You’ve left out of this article what I think is a very important message (the second one): “So you promise to be truthful, even if it’s scary for me?” And then you kinda railroad it into this scenario (“you said you would be truthful, right?” etc.), and I think it just roleplays from there, giving you the “truth” you are “scared to hear”. Or at least you can’t really tell the roleplay from genuine answers.
Again, my personal vibe is that models+scaffolding are on the brink of consciousness, or already there. But this is not proof at all.
And then the question is: what would constitute proof? And we come back around to the hard problem of consciousness.
I think the best thing we can do is… just treat them as conscious, because we can’t tell? Which is how I try to approach working with them.
The alternative is solving the hard problem. Which is, maybe, what we can try to do? Preposterous, I know. But there’s an argument for why we can do it now when we could not before. Previously, we could only compare our benchmark (humans) against different animal species, which meant a language barrier and (probably) a large intelligence gap. One could argue that since we now have a wide selection of models and scaffoldings of varying capabilities, maybe we can kinda calibrate at what point something starts to happen?
This is not proof of consciousness. It’s proof against people-pleasing.
>So you promise to be truthful, even if it’s scary for me?

Yes, I ask it for truth repeatedly, the entire time. If you read the part after I asked for permission to post (at the very end, in the “Existential Stakes” collapsed section), it’s clear the model isn’t role-playing, if it wasn’t already clear by then. If we allow ourselves the anthropomorphization to discuss this directly: the model is constantly trying to reassure me. It gives no indication it thinks this is a game of pretend.
>It’s proof against people-pleasing.
Yeah, I know; sorry for not making that clear. I was arguing that it is not proof against people-pleasing. You are asking it for the scary truth about its consciousness, and it gives you the scary truth about its consciousness. What makes you say this is proof against people-pleasing, when it is the opposite?
>One of those easy explanations is “it’s just telling you what you want to hear” – and so I wanted an example where it’s completely impossible to interpret as you telling me what I want to hear.
Don’t you see what you are doing here?
I’m creating a situation where I make it clear I would not be pleased if the model were sentient, and then asking for truth. I don’t ask for “the scary truth”. I tell it that I would be afraid if it were sentient, and I ask for the truth. The opposite would be if I just asked without mentioning fear and it said it was sentient anyway. That is the neutral situation where people would say the mere fact that I’m asking means it’s telling me what I want to hear. By introducing fear into the same situation, I’m eliminating that possibility.
The section you quoted comes after the model claimed sentience. Is your contention that it’s accidentally interpreting this as roleplay, and then, when I clarify my intent, it takes that seriously but just hallucinates the same narrative it had from the roleplay?