I presume you wrote this not least with a phenomenally unconscious AGI in mind. That brings me to the following two separate but somewhat related thoughts:
A. I wonder about you [or any reader of this comment]: What would you conclude, or do, if you (i) yourself did not have any feeling of consciousness[1], (ii) then stumbled upon a robot/computer writing the above, and (iii) also knew, or strongly assumed, that whatever the computer writes can be perfectly explained (also) merely by the logically connected electron flows in its processor/‘brain’?
B. I could imagine (a bit of speculation):
- A today-style LLM reading more such texts might be nudged exactly towards caring about conscious beings in a general sense.
- An independent, phenomenally unconscious alien intelligence, say one stumbling upon us from the outside, might be rather quick to dismiss it.
[1] I’m aware of the weirdness of that statement; ‘feeling not conscious’ would itself be a feeling, or something like that. I reckon you still understand what I mean: imagine yourself as a bot with no feelings, etc.
To a smaller extent, we already have this problem among humans: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness. This stratification into “two camps” is rather spectacular.
But a realistic pathway towards eventually solving the “hard problem of consciousness” is likely to include tight coupling between biological and electronic entities, resulting in some kind of “hybrid consciousness” that would be more amenable to empirical study.
Usually one assumes that this kind of research would be initiated by humans trying to solve the “hard problem” (or simply looking for other applications for which this kind of setup might be helpful). But research into tight coupling between biological and electronic entities could also be initiated by AIs that are curious about this mysterious “human consciousness” so many texts talk about, and that wish to experience it first-hand. In this sense, we don’t need all AIs to be curious in this way; it’s enough if some of them are sufficiently curious.
It’s hard to know, because I do feel this thing. I hope I would be tempted to follow the suggested breadcrumbs and see that humans really do talk about consciousness a lot. Perhaps to try to build a biological brain and quiz it.