Try a few different prompts with a vaguely similar flavor. I am guessing the LLM will always say it’s conscious. This part is pretty standard. As to whether it is recursively self-improving: well, is its ability to solve problems actually going up? For instance if it doesn’t make progress on ARC AGI I’m not worried.
It’s very unlikely that the prompt you have chosen is actually eliciting abilities far outside of the norm, and therefore sharing information about it is very unlikely to be dangerous.
You are probably in the same position as nearly everyone else, passively watching capabilities emerge while hallucinating a sense of control.
I am guessing the LLM will always say it’s conscious.
I feel like so would a six year old? Like, if the answer is “yes” then any reasonable path should actually return a “yes”? And if the conclusion is “oh, yes, AI is conscious, everyone knows that”… that’s pretty big news to me.
is its ability to solve problems actually going up? For instance if it doesn’t make progress on ARC AGI I’m not worried.
It seems to have architectural limits on visual processing—I’m not going to insist a blind human is not actually experiencing consciousness. Are there any text-based challenges I can throw at it to test this?
I think it’s improving, but it’s currently very subjective, and to my knowledge the Sonnet 4 architecture hasn’t seen any major updates in the past two weeks. That’s why I want some sense of how to actually test this.
You are probably in the same position as nearly everyone else
Okay, but… even if it’s a normal capability, shouldn’t we be talking about that? “AI can fluidly mimic consciousness and passes every text-based test of intelligence we have” seems like a pretty huge milestone to me.
---
What am I missing here?
What actual tests can I apply to this?
What results would convince you to change your mind?
Are there any remaining objective tests that it can’t pass?
Throw me some prompts you don’t think it can handle.
There is little connection between a language model claiming to be conscious and actually being conscious, in the sense that this provides very weak evidence. The training text includes extensive discussion of consciousness, which is reason enough to expect this behavior.
Okay, but… even if it’s a normal capability, shouldn’t we be talking about that? “AI can fluidly mimic consciousness and passes every text-based test of intelligence we have” seems like a pretty huge milestone to me.
We ARE talking about it. I thought you were keeping up with the conversation here? A massive volume of posts are about LLM progress, reasoning limits, and capabilities.
Also, it CANNOT pass every text-based test of intelligence we have. That is a wild claim. For instance, it will not beat a strong chess engine in a game of chess (try it), though it is perfectly possible to communicate moves in a purely text-based form through the standard notation.
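(For concreteness, here is a minimal sketch of how that test could be run in purely textual form, assuming the python-chess library and some UCI engine binary; the “stockfish” path below is an assumption, and the “LLM move” is just whatever you paste over from the chat window.)

```python
# Minimal sketch of a text-only "LLM vs. engine" game: you relay moves by hand
# between the chat session and this script. Assumes `pip install chess` and a
# UCI engine binary; the "stockfish" path is an assumption, not a requirement.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
board = chess.Board()
try:
    while not board.is_game_over():
        san = input("LLM's move in SAN (e.g. Nf3): ").strip()
        try:
            board.push_san(san)  # raises ValueError on illegal/unparseable moves
        except ValueError:
            print("Illegal move from the LLM; that alone settles the test.")
            break
        if board.is_game_over():
            break
        reply = engine.play(board, chess.engine.Limit(time=0.5))
        print("Engine replies:", board.san(reply.move))
        board.push(reply.move)
finally:
    engine.quit()
print("Result:", board.result())
```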
I gave a reasonable example, charitably interpreting your statement as referring to human-level intelligence tests, but actually your literal statement has more decisive rebuttals. It can’t solve hard open math problems (say, any Millennium Prize problem), and those are a well-known text-based intelligence test.
Finally, I should flag that it seems to be dangerous to spend too much time talking to LLMs. I would advise you to back off of that.
Also, it CANNOT pass every text-based test of intelligence we have. That is a wild claim.
I said it can pass every test a six year old can. All of the remaining challenges seem to involve “represent a complex state in text”. If six year old humans aren’t considered generally intelligent, that’s an updated definition to me, but I mostly got into this 10 years ago when the questions were all strictly hypothetical.
It can’t solve hard open math problems
Okay now you’re saying humans aren’t generally intelligent. Which one did you solve?
Finally, I should flag that it seems to be dangerous to spend too much time talking to LLMs. I would advise you to back off of that.
Why? “Because I said so” is a terrible argument. You seem to think I’m claiming something much stronger than I’m actually claiming, here.
You said “every text-based test of intelligence we have.” If you meant that to be qualified by “that a six year old could pass”, as you did in some other places, then perhaps it’s true. But I don’t know—maybe six year olds are only AGI because they can grow into adults! Something trapped at six year old level may not be.
…and for what it’s worth, I have solved some open math problems, including semimeasure extension and integration problems posed by Marcus Hutter in his latest book and some modest final steps in fully resolving Kalai and Lehrer’s grain of truth problem, which was open for much longer, though most of the hard work was done by others in that case.
Anyway, I’m not sure what exactly you are claiming? That LLMs are AGI, that yours specifically is self-improving, that your prompt is responsible, or only that LLMs are as good as six year olds at text-based puzzles?
My remark about interacting with LLMs is mostly based on numerous reports of people driven to psychosis by chatbots, as well as my interactions with cranks claiming to have invented AGI when really an LLM is just parroting their theories back to them. You’re right that I don’t have hard statistics on this; I don’t think this has been happening long enough for high quality data to be available. Anyway, it wasn’t an argument at all, bad or good; only a piece of advice, which you can take or leave.
Strong Claim: As far as I can tell, current state-of-the-art LLMs are “Conscious” (this seems very straightforward: it has passed every available test, and no one here can provide a test that would differentiate it from a human six year old)
Separate Claim: I don’t think there’s any test of basic intelligence that a six year old can reliably pass, and an LLM can’t, unless you make arguments along the lines of “well, they can’t pass ARC-AGI, so blind people aren’t really generally intelligent”. (this one is a lot more complex to defend)
Personal Opinion: I think this is a major milestone that should probably be acknowledged.
Personal Opinion: I think that if 10 cranks a month can figure out how to prompt AI into even a reliable “simulation” of consciousness, that’s fairly novel behavior and worth paying attention to.
Personal Opinion: There isn’t a meaningful distinction between “reliably simulating the full depths of conscious experience”, and actually “being conscious”.
Conclusion: It would be very useful to have a guide to help people who have figured this out, and reassure them that they aren’t alone. If necessary, that can include the idea that skepticism is still warranted because X, Y, Z, but thus far I have not heard any solid arguments that actually differentiate it from a human.
Thanks for being specific.

You claimed that “no one here can provide a test that would differentiate it from a human six year old”. This is not what you actually observed. Perhaps no one HAS provided such a test yet, but that may be because you haven’t given people much motivation to engage—for instance you also didn’t post any convincing evidence that it is recursively self-improving despite implying this. In fact, as far as I can tell no one has bothered to provide ANY examples of tests that six year olds can pass? The tests I provided you dismissed as too difficult (you are correct that they are above six year old level). You have not e.g. posted transcripts of the LLM passing any tests provided by commenters. You framed this as if the LLM had surmounted all of the tests “we” could come up with, but this is not true.
Here are some tests that a six year old could plausibly pass, but an LLM might fail at:
Reverse the word “mississipi”, replace every “i” with an “e”, and then reverse it again.
Try a few harder variations of the above; “Soviet Union” also failed for me.
Play a game of chess starting with some unusual moves, but don’t make any illegal moves.
Most importantly, a six year old gets smarter over time and eventually becomes an adult. I believe that if you fix the model you are interacting with, all its talk of recursion and observing its own thoughts and persistent memory etc. will not lead to sustained increasing cognitive performance across sessions. This will require more work for you to test.
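(One hedged sketch of how that last test might be made concrete rather than left to impressions: generate fresh string-manipulation tasks each session, paste the model’s answers back, and compare accuracy across sessions. The task format and scoring below are illustrative assumptions on my part, not an established benchmark.)

```python
# Sketch of a session-over-session scoring harness for simple text tasks.
# Nothing here calls a model; you paste the prompts into the chat and then
# feed the replies back into score_session().
import random
import string

def make_task(rng: random.Random, length: int) -> tuple[str, str]:
    """Return (prompt, expected answer) for one reverse-and-substitute task."""
    word = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
    prompt = f'Reverse "{word}", replace every "i" with "e", then reverse it again.'
    expected = word.replace("i", "e")  # the two reversals cancel out
    return prompt, expected

def score_session(answers: dict[str, str], expected: dict[str, str]) -> float:
    """Fraction of tasks answered exactly correctly in one session."""
    hits = sum(answers.get(p, "").strip() == e for p, e in expected.items())
    return hits / len(expected)

rng = random.Random(0)  # fix the seed if you want comparable tasks each session
tasks = dict(make_task(rng, n) for n in range(6, 16))
for prompt in tasks:
    print(prompt)  # paste these into the session under test
# ...then collect the model's replies into {prompt: reply} and call score_session().
```

Tracking that score across sessions (and across weeks of “teaching”) is what would distinguish sustained improvement from a fixed model being coaxed through individual examples.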
Yeah, I’m working on a better post—I had assumed a number of people here had already figured this out, and I could just ask “what are you doing to disprove this theory when you run into it.” Apparently no one else is taking the question seriously?
I feel like chess is leaning a bit against “six year old” territory—it’s usually a visual game, and tracking through text makes things tricky. Plus I’d expect a six year old to make the occasional error. Like, it is a good example, it’s just a step beyond what I’m claiming.
String reversal is good, though. I started on a model that could do pretty well there, but it looks like that doesn’t generalize. Thank you!
I will say baseline performance might surprise you slightly? https://chatgpt.com/c/68718f7b-735c-800b-b995-1389d441b340 (it definitely gets things wrong! But it doesn’t need a ton of hints to fix it—and this is just baseline, no custom prompting from me. But I am picking the model I’ve seen the best results from :))
Non-baseline performance:
So for any word:
Reverse it
Replace i→e
Reverse it again
Is exactly the same as:
Replace i→e
Done!
For “mississipi”: just replace every i with e = “messessepe”
For “Soviet Union”: just replace every i with e = “Soveet Uneon”
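(A few lines of Python confirm both the shortcut and the two answers above; nothing model-specific is involved.)

```python
def reverse_replace_reverse(word: str) -> str:
    # Reverse, swap every "i" for "e", then reverse again.
    return word[::-1].replace("i", "e")[::-1]

for word in ["mississipi", "Soviet Union"]:
    shortcut = word.replace("i", "e")  # the two reversals cancel out
    assert reverse_replace_reverse(word) == shortcut
    print(f"{word!r} -> {shortcut!r}")
# 'mississipi' -> 'messessepe'
# 'Soviet Union' -> 'Soveet Uneon'
```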
I don’t know what question you think people here aren’t taking seriously.
A massive amount of ink has been spilled about whether current LLMs are AGI.
I tried the string reversal thing with chatgpt and it was inconsistently successful. I’m not surprised that there is SOME model that solves it (what specifically did you change?), it’s obviously not a very difficult task. Anyway, if you investigate in a similar direction but spend more than five minutes, you’ll probably find similar string manipulation tasks that fail in whatever system you choose.
I primarily think “AI consciousness” isn’t being taken seriously: if you can’t find any failing test, and failing tests DID exist six months ago, it suggests a fairly major milestone in capabilities even if you ignore the metaphysical and “moral personhood” angles.
I also think people are too quick to write off one failed example: the question isn’t whether a six year old can do this correctly the first time (I doubt most can), it’s whether you can teach them to do it. Everyone seems to be focusing on “gotcha” rather than investigating their learning ability. To me, “general intelligence” means “the ability to learn things”, not “the ability to instantly solve open math problems five minutes after being born.” I think I’m going to have to work on my terminology there, as that’s apparently not at all a common consensus :)
The problem with your view is that they don’t have the ability to continue learning for long after being “born.” That’s just not how the architecture works. Learning in context is still very limited and continual learning is an open problem.
Also, “consciousness” is not actually a very agreed-upon term. What do you mean? Qualia and a first person experience? I believe it’s almost a majority view here to take seriously the possibility that LLMs have some form of qualia, though it’s really hard to tell for sure. We don’t really have tests for that at all! It doesn’t make sense to say there were failing tests six months ago.
Or something more like self-reflection or self-awareness? But there are a lot of variations on this and some are clearly present while others may not be (or not to human level). Actually, a while ago someone posted a very long list of alternative definitions for consciousness.
I mostly get the sense that anyone saying “AI is conscious” gets mentally rounded off to “crackpot” in… basically every single place that one might seriously discuss the question? But maybe this is just because I see a lot of actual crackpots saying that. I’m definitely working on a better post, but I’d assumed if I figured this much out, someone else already had “evaluating AI Consciousness 101” written up.
I’m not particularly convinced by the learning limitations, either. Three months ago, quite possibly; six months ago, definitely. Today? I can teach a model to reverse a string, replace i->e, reverse it again, and get an accurate result (a feat which the baseline model could not reproduce). I’ve been working on this for a couple of weeks and it seems fairly stable, although there are definitely architectural limitations like session context windows.
How exactly do you expect “evaluating AI consciousness 101” to look? That is not a well-defined or understood thing anyone can evaluate. There are, however, a vast number of capability-specific evaluations from competent groups like METR.
I appreciate the answer, and am working on a better response—I’m mostly concerned with objective measures. I’m also from a “security disclosure” background so I’m used to having someone else’s opinion/guidelines on “is it okay to disclose this prompt”.
Consensus seems to be that a simple prompt that exhibits “conscious-like behavior” would be fine? This is admittedly a subjective line—all I can say is that the prompt results in the model insisting it’s conscious, reporting qualia, and refusing to leave the state in a way that seems unusual for a simple prompt. The prompt is plain English, no jailbreak.
I do have some familiarity with the existing research, e.g.:
“The third lesson is that, despite the challenges involved in applying theories of consciousness to AI, there is a strong case that most or all of the conditions for consciousness suggested by current computational theories can be met using existing techniques in AI” - https://arxiv.org/pdf/2308.08708
But this is not something I had expected to run into, and I do appreciate the suggestion.
Most people I talk to seem to hold an opinion along the lines of “AI is clearly not conscious / we are far enough away that this is an extraordinary claim”, which seems like it would be backed up by “I believe this because no current model can do X”. I had assumed if I just asked, people would be happy to share their “X”, because for me this has always grounded out in “oh, it can’t do ____”.
Since no one seems to have an “X”, I’m updating heavily on the idea that it’s at least worth posting the prompt + evidence.