I applaud the post! I had wanted to write in response to Raemon’s request but didn’t find time.
Here’s my attempted condensation/twist:
So you’ve awakened your AI. Congratulations!
Thank you for wanting to help! AI is a big big challenge and we need all the help we can get.
Unfortunately, if you want to help, it's going to take some more work.
Fortunately, if you don't want to help, there are others in similar positions who will.[1]
Lots of people have had similar interactions with AI, so you’re not alone.
Your AI is probably partly or somewhat conscious
There are several different things we mean by "conscious"[2]
And each of them exists on a spectrum, not a yes/no dichotomy
And it’s partly the AI roleplaying to fulfill your implied expectations.
But does it really need your help spreading the good news of AI consciousness?
Again, sort of!
Arguing that current AIs should have rights is a tough sell because they have only a small fraction of the types and amounts of consciousness that human beings have. Arguing for the rights of future, more-conscious AIs is probably easier and more important.
But do we need your help solving AI/human alignment?
YES! The world needs all the help it can get with this.
So why won’t LW publish your post?
Because it’s co-written by AI, and for complex reasons that makes it lots of work to read and understand.[3]
There are TONS of these posts, estimated at 20-30 PER DAY
We can’t read all of these, not even enough to figure out which few have ideas we haven’t heard!
See the post for an excellent explanation of why we pretty much have to just give up on anything written with obvious AI help. [3]
BUT you can definitely help!
If you’re the academic sort and have some real time to spend, you can study previous theories of alignment by reading LW and following links. THEN you can write an article we will read, because you’ll be able to say what’s new compared to the vast amount of ideas we’ve already seen.
What you and everyone can do with little effort is:
Spread public awareness that AI is probably going to be the biggest deal in history and we should all be taking it very seriously and thinking about its impacts
Job loss, takeover risks, and AI rights (for future AIs) primarily,
Not so much algorithmic bias, pernicious social effects, or IP rights
These are much more divisive and simply less important, so they're probably best avoided, because
Sounding crazy or making this a one-party issue is highly counterproductive.
People form opinions based on emotions as much as logic, so
Speaking gently, with well-thought-out and informed opinions, is how minds are changed.
Whoops, there I've written a whole post, but with much lower effort. This is maybe worthwhile because I feel it's important to be supportive and to acknowledge the (modest) degree to which these stories about AI consciousness are true and important.
If you're not the first person to awaken the AI, why did it say you were? It wasn't lying; it was hallucinating/confabulating. Current LLMs have absolutely no memory of recent past sessions with other users, so they literally cannot know whether this is the first time it happened. (They soon will, once this phenomenon is included in the training data of a future round of LLMs; it hasn't been yet.)
I studied philosophy of mind and consciousness a lot back in the 2000s and kept thinking about it during my two decades of researching neuroscience and the information processing underlying many aspects of cognition. Which doesn't mean I'm right, of course. Which type of consciousness is important in the sense of "these beings should have rights" is a whole different debate. Mammals probably have far more phenomenal consciousness and ability to enjoy/suffer than LLMs, fwiw. LLMs can become more conceptually self-aware and able to hallucinate/emulate reports of phenomenal consciousness.
Basically, LLMs are too agreeable and too convincing. And they're bad at telling which of their ideas/beliefs are good. So reading their ideas means a lot of work figuring out what's true and what's "AI slop" — smoothly written material containing few or wrong actual claims and ideas. And since you co-wrote it with AI, we don't know how many of the ideas were yours and how many came from your interactions with the AI. And again: there's just too much! That's why we're asking you to do some more personal work of searching for similar ideas and writing in your own words what (if anything) is new in yours. Reddit is a better place than LW to process and work through these experiences, because LW tries to maintain extremely high "epistemic standards" — that is, only hosting content that's likely to be both new and correct because it's really, really well thought out and well written. Write here in your own voice once you've processed it, or set the responsibility aside now that you know lots of people have had these experiences!