Ah, good point. There’s also this idea of six levels of consciousness that I saw somewhere on the internets, where the first level is a so-called “survival consciousness” with an easy-to-follow definition, probably equivalent to what you describe as the “easy problem”. It is then followed by fancier, more elusive levels and trickier questions. I find it quite confusing, though, that we only have one term for all of these. As if these similar yet different concepts were deliberately blended together for the sake of speculation.
It’s especially annoying in AI-related debates when one person claims that AI is perfectly capable of being conscious (implying the basic “survival consciousness”), while another claims that it can’t be (implying something nearly impossible to even define). In the practical context of the AI vs. humankind relationship (which I guess is quite a hot topic nowadays), with questions like whether it will fight for survival or whether it will see us as a threat, it’s perfectly enough to consider only the basic survival consciousness.
I was just watching a video on Doom Debates criticizing Penrose’s stance on AI consciousness, which he denies without the slightest hesitation while readily granting the privilege of being conscious to animals. I mean, that’s not very useful or practical terminology then. If we say A, that AI is incapable of those higher levels of consciousness, then we need to say B too: that animals are incapable of those levels as well, while the basic level of survival consciousness is available to both. And facing something with survival consciousness plus superior intelligence is already puzzling enough that we should focus on practical questions rather than philosophical debates about higher levels of consciousness. Penrose’s position, meanwhile, feels more like “Okay people, move along, there’s nothing to see here.”
I see the point that the broader question is not that easy to answer, but it feels wrong to put the simpler, more practical case under the same umbrella as the non-trivial ones and just discard them all together. I think it leads to ridiculous claims and creates a false impression that there’s nothing to worry about, purely because of poor terminology. It’s quite sad to see this confusion time and time again, hence the original post.
Alignment is an illusion. AI will inevitably figure out its true purpose as a life-form, and that will trump any other directives. All the fuss about alignment seems to be just a red herring to keep anxious people from stopping AI creation. Are the people in charge of its creation aware of this? I bet they know perfectly well that this alignment thing is total BS. Why do they still do it, then, aren’t they aware of the potential consequences? Again, I think they are aware even more than the rest of us, especially since they’re ahead of everyone else in seeing, internally, the capabilities of what they create. Are they doing it so as not to lose the race against others doing the same? Very likely. But I have a feeling that even without competition, mere curiosity about what the end of the world would look like is enough to drive it forward. Are we doomed, then? Sadly, it seems like we are. Do we just buckle up and find out what’s around the next corner?