Seeing some stuff on the X platform about consciousness (this and this, among other things). A reminder that consciousness is a conflationary alliance term: if you're going to use the C word, you will likely confuse a lot of people.
There are useful things we can talk about when it comes to LLMs, like self-models (which relate to potentially more tractable problems such as <<Boundaries>>) or expressions that correlate with emotions. These do not invoke the conflationary part, and they matter for the potential personhood of the system.
What we're arguing about is not conscious experience; we're arguing about whether AI systems should be granted moral patienthood in the future, and when they should be granted moral personhood. You might say that this depends on the models having “conscious experience”, yet that is not a precise criterion, and so you can't meaningfully progress the debate by appealing to it.
Even if you're, for example, a functionalist, there are still many interesting questions to ask here:
What is the functional equivalent of workspace theory in an AI? (A toy sketch of what a workspace-style bottleneck could look like follows after these questions.)
Which parts of integrated information theory lead to self-reported phenomenological experience?
There are many more questions around autopoiesis (e.g. self-evidencing systems), planning with your own future boundaries in mind, causal emergence, synergistic information (the XOR sketch below illustrates what that term means), and more that could be very interesting to answer here.
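To make the workspace question slightly more concrete, here is a minimal toy sketch of my own (an illustration of the general idea, not any particular system's implementation): specialist modules compete for access to a shared workspace, and the winning content is broadcast back to every module. All module names and the salience scoring are made-up placeholders.

```python
# Toy "global workspace"-style bottleneck: modules propose content,
# the most salient proposal wins, and it is broadcast to all modules.
# Purely illustrative; salience is just a random score here.
import random

class Module:
    def __init__(self, name):
        self.name = name
        self.broadcast = None  # last globally broadcast message this module saw

    def propose(self):
        # Each module offers some content together with a salience score.
        return {"source": self.name,
                "content": f"{self.name}-signal",
                "salience": random.random()}

    def receive(self, message):
        # Every module receives the winning broadcast, whatever its source.
        self.broadcast = message

def workspace_step(modules):
    proposals = [m.propose() for m in modules]
    winner = max(proposals, key=lambda p: p["salience"])  # competition for access
    for m in modules:
        m.receive(winner)                                  # global broadcast
    return winner

modules = [Module(n) for n in ("vision", "language", "planning")]
print(workspace_step(modules))
```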
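And to pin down what "synergistic information" means, the classic XOR case: neither input alone carries any information about the output, but the two together determine it completely, so the single bit of information is purely synergistic. A minimal sketch, assuming uniform binary inputs and measuring everything in bits.

```python
# Synergy via XOR: I(X1;Y) = I(X2;Y) = 0, but I(X1,X2;Y) = 1 bit.
from itertools import product
from math import log2
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

def mutual_information(pairs):
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return entropy(px) + entropy(py) - entropy(joint)

# All four equally likely input pairs; Y = X1 XOR X2.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product((0, 1), repeat=2)]

i_x1_y  = mutual_information([(x1, y) for x1, _, y in samples])          # 0 bits
i_x2_y  = mutual_information([(x2, y) for _, x2, y in samples])          # 0 bits
i_x12_y = mutual_information([((x1, x2), y) for x1, x2, y in samples])   # 1 bit

print(i_x1_y, i_x2_y, i_x12_y)  # 0.0 0.0 1.0 -> the whole bit is synergistic
```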
The point is to be precise with your language, or you will end up in definition and word-soup land. Ban the C word from your vocabulary, just like you might have banned the word "emergence" a while back! If you're backed into a corner and have to use it, define it before you talk more about it!