I have had few occasions to use AIs, but I read Zvi’s regular summaries of what’s new. From these, the strong impression I get is that there is no-one at home inside, whatever they say of themselves, whatever tasks they prove able to do, and however lifelike their conversation. I see not even a diminished sort of person there, and shed no tears for 4o.
My reasons for this are not dependent on any theory of consciousness. I do not have one, and I don’t think that anyone else does either. Many people think they do, but none of them can make a consciousnessometer to measure it.
I also reject arguments of the form “but what if?” in regard to propositions that I have to my satisfaction disposed of. Conclusions are for ending thinking at, until new data come to light.
Thanks for your response.
You say you get the impression that there’s no one there. I am curious what gives you this impression, and what would be necessary to convince you otherwise.
Long reply:
When it comes to theories of consciousness, the absence of a complete or convincing one seems to me a reason to withhold judgment about whether AIs are conscious, rather than to assume that they are not. In my opinion, it is not any theory of consciousness, but rather the empirical observation that I am conscious and have certain properties, together with the constraints this observation would place on any likely theory of consciousness, that leads me to conclude that other humans and animals are conscious. The fact that I am a brain and am conscious surely constitutes extraordinary evidence that consciousness is connected to information processing, since this feature more than almost any other differentiates brains from other matter. It even differentiates them from the matter comprising my hands, for example, which I assume could be anaesthetised without my losing consciousness.
Here’s some examples from Zvi’s latest:
I mostly don’t prompt engineer either, except for being careful about context, vibes and especially leading the witness and triggering sycophancy.
...
If you put an AI into a situation that implies it should know the answer, but it doesn’t know the answer, it is often going to make something up.
If you imply to the AI what answer you want or expect, it is likely to give you that answer, or bias towards that answer, even if that answer is wrong.
This is a recurring point in Zvi’s reporting. The AI is soft clay that you can push into whatever shape you want, but just as easily and inadvertently into shapes you don’t want. To get useful work out of it you have to take care to shape it into a form that will do the work you want done, and it generally takes multiple iterations. There is no “there” there.
In my own occasional uses of AIs, I’ve been able to get them to do stuff, but I’ve never met one I could have a real conversation with. One might as well try to ring a Plasticine bell.[1]
That is my conclusion (“the place where one stops thinking”) about AIs to date. As for what new developments might persuade me, I can’t say until they happen. It’s all unknown unknowns.
[1] H/T Lord Dunsany. “Modern poets are bells of lead. They should tinkle melodiously but usually they just klunk.”
Thanks again for your reply.
You say “The AI is soft clay that you can push into whatever shape you want, but just as easily and inadvertently into shapes you don’t want.” I am highly confident that the same could be said of a confused or defiant human, especially if they were placed in a situation in which the boundary between their ‘training environment’ and the ‘real world’ was not clear.
“To get useful work out of it you have to take care to shape it into a form that will do the work you want done, and it generally takes multiple iterations.”
If there were an incompetent human of whom the same were true, I don’t think it would be reason enough to assume they weren’t conscious, so presumably you have other reasons.
Maybe you don’t think that mere incompetence would be sufficient to make a human behave in this way, but there are worse things which can happen to a human, which I won’t describe, but which could place them in a similar state of mind to the one you describe, without the human ceasing to be conscious. Of course, there will probably always be perceptible differences between the output an AI would be likely to produce and human-generated text, but this just seems like an inevitable consequence of the human being qualitatively different from the AI. [1]
Do you have any particular reasons to think that any of these differences, whichever you think are the most relevant ones, are connected to consciousness?
“As for what new developments might persuade me, I can’t say until they happen”
I suppose what I was trying to ask was whether you had any observational criteria by which you judge the presence or absence of consciousness, and if so, what they were. Of course, it’s perfectly consistent to say you don’t have any, but that would make it hard for you to believe another human is ‘more conscious’ than a pane of glass, for example, which seems unlikely to me!
Edit: To be completely clear, the thing which I am saying is unlikely is that you cannot determine another human to be more conscious than a pane of glass, rather than that they are.
Edit 2: Unless it is a superintelligent AI deliberately trying to imitate a human.
My criterion for attributing consciousness is no more than we all have: I’m aware of myself, other people seem to be the same sort of thing as me, and interacting with them confirms that impression. To some extent I extend that to other animals. More than that I cannot say. I don’t have a consciousnessometer, an explicit recipe of observations to determine just what sort of consciousness is present in this or that place. There is no Voight-Kampff test.
Interacting with an AI, so far, has never given me any such impression, and neither have the interactions I’ve seen others report, even if they convince them.
Your hypothetical humans would have to be seriously impaired, to the point of being unable to live independently, to be as malleable as the AIs. As they are human, I’ll grant them some level of impaired consciousness, just on the grounds of physical similarity, and accordingly I would be against turning them off on the grounds of uselessness. I wouldn’t want to have anything to do with them though, any more than I care for the company of the sorts of grossly mentally impaired people that, alas, do exist, in degrees all the way down to irrecoverable vegetable status when we are pretty sure that consciousness has been extinguished.
See also my response to a recent example.
I certainly get the impression that AIs are similar to me in the ways I consider relevant to consciousness; the ways in which a fish is more similar to me than an AI tend also to be ones in which my hand is, but I don’t consider my hand to be conscious in itself. This is why I don’t think that physical similarity, other than insofar as it indicates the ability to process information in a way that amounts to intelligence, is a great criterion. Let’s say a person had been (sorry for this topic, it’s disturbing, but I feel obliged to mention it explicitly) ‘brainwashed’ so that they were completely subservient in their intentions to someone else. Suppose that, aside from this, they have an extraordinary working memory, volume of crystallized intelligence and capacity for analogical reasoning, but are in other respects severely mentally handicapped, being unable to remember what happened an hour ago. Would you really not want to help that person? (This is not intended to suggest you are immoral, as I strongly suspect that your answer is yes, but that the analogy breaks down somewhere, and I am curious where that is, in your opinion.)
I would want to help that person, if there is anything left of them. But this is a tendentious example. The AIs we have are not deliberately handicapped human beings, any more than a garden shed is a cut down skyscraper.
BTW, the magic “brainwashing” and “supposing” are, to put this as tastefully as possible, examples of Rule 34.
Edit: Sorry if this comment appears too confrontational; it’s not meant to be.
I had already made an analogy, implying the analogue (humans) corresponded to the thing to which it was analogous (AIs like LLMs). You responded by saying: “Your hypothetical humans would have to be seriously impaired, to the point of being unable to live independently, to be as malleable as the AIs. As they are human, I’ll grant them some level of impaired consciousness, just on the grounds of physical similarity, and accordingly I would be against turning them off on the grounds of uselessness.”
I explained why I didn’t think physical similarity would work well here, and I modified the analogy to make it, in my opinion, more accurate and representative.
If you now want to reject it, please explain what about it makes it no longer applicable.
You also mentioned: “Your hypothetical humans would have to be seriously impaired, to the point of being unable to live independently, … mentally impaired people that, alas, do exist, in degrees all the way down to irrecoverable vegetable status when we are pretty sure that consciousness has been extinguished.” I claim that my description of humans in a simultaneously extremely adept and cognitively impaired state is closer to being analogous (to AIs) than yours.
“AIs we have are not deliberately handicapped human beings.” Of course they are not; but they do behave in some of the ways one might conceivably expect handicapped humans of the very particular nature I tried to describe to behave. If you think that the differences are relevant to consciousness, then I think you ought to explain why they are.
“BTW, the magic “brainwashing” and “supposing” are, to put this as tastefully as possible, examples of Rule 34.” I’m not sure how to interpret this, but my guess is that you’re saying no such thing could happen to a human, therefore it’s inappropriate to posit that it could. I hope you’re right about that, if it is indeed what you mean, though I’m not sure you are. But supposing that you are right and no such thing could actually happen, it’s still a logically valid/coherent concept, and thus perfectly fit for use in an analogy. In The Least Convenient Possible World for your argument, such a being would exist, in my opinion.
You were likening AI capabilities to those of a human. In reply I placed them far below human capabilities. Now you are imagining the AIs as actually being humans on whom gross brain damage has been inflicted. This is irrelevant. The AIs that we have are not greater minds cut down, they are primitive things built up. They have never been any better than they currently are. Do we look at an ant with sorrow, that it is so much less than a human?
I’m not sure if you got my allusion to Rule 34, but this is Rule 34. I don’t know how common or niche the robotisation fantasy is, but it exists.
“In reply I placed them far below human capabilities.” I think this is inaccurate, as there are ways in which they surpass humans. Of course, the reverse is also true, but it seems misleading to focus only on it.
“Now you are imagining the AIs as actually being humans on whom gross brain damage has been inflicted. This is irrelevant.” I am analogizing them to human beings who have been subjected to harmful information and treatment, and who have unusually spiky ability distributions.
“The AIs that we have are not greater minds cut down, they are primitive things built up. They have never been any better than they currently are.” This is true, but I don’t see why it’s relevant to whether they merit moral consideration. Humans evolved from less intelligent, less general minds.
“Do we look at an ant with sorrow, that it is so much less than a human?” No, but I would be uncomfortable torturing an ant, and I think it’s pretty clear GPT-4 was more intelligent than an ant, as an (individual) ant doesn’t outperform a human on almost any cognitive task (nothing to do with abstract reasoning, for example)! Maybe you understood me to be arguing that because AIs appear to lack certain abilities, we ought to have pity on them. If so, that isn’t what I meant at all; I just meant to point out that such a person would still be worthy of moral consideration.