I’ve traveled these roads too. At some point I thought that the hard problem reduced to the problem of deriving an indexical prior: a prior on having a particular position in the universe, which we should expect to derive from the specifics of its physical substrate. And it’s apparent that whatever the true indexical prior is, it can’t be studied empirically; it is inherently mysterious. A firmer articulation of “why does this matter experience being?”. Today, apparently, I think of that less as a deeply important metaphysical mystery and more as just another imperfect logical machine that we have to patch together just well enough to keep our decision theory working. Last time I scratched at this I got the sense that there’s really no truth to be found beyond that. IIRC Wei Dai’s UDASSA answers this with the inverse Kolmogorov complexity of the address of the observer within the universe, or something. It doesn’t matter. It seems to work.
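To put my loose gloss of that down (not a careful statement of UDASSA, and the additive split of the complexity is just my heuristic reading, not something I’m claiming the original spells out this way):

$$
P(x) \;\propto\; 2^{-K(x)}, \qquad K(x) \;\approx\; K(\text{universe}) + K(\text{address of the observer within it}),
$$

so observer-moments with simpler-to-specify addresses in simpler-to-specify universes get more prior weight.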
But after looking over this, reexamining, yeah, what causes people to talk about consciousness in these ways? And I get the sense that almost all of the confusion comes from the perception of a distinction between Me and My Brain. That could come from all sorts of dynamics: sandboxing of deliberative reasoning in response to hostile information environments, so as to more easily lie in external politics, and as an outcome of internal (inter-module) politics (a meme won’t attempt to supersede a gene if the meme is deluded into thinking it’s already in control, so that’s what the gene does).
That sort of sandboxing dynamic arises inevitably from other-modelling. In order to simulate another person, you need to be able to isolate the simulation from your own background knowledge and replace that knowledge with your approximations of theirs; the simulation cannot feel the brain around it. I think most people’s conception of consciousness is like that: a simulation of what they imagine to be themselves, similarly isolated from most of the brain.
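A toy sketch of that isolation, in code (my framing, not anything from the comment; the belief dict is obviously a cartoon of “background knowledge”):

```python
# Toy sketch of "sandboxed" other-modelling: the simulated agent reasons only
# over the beliefs we hand it, with no reference back to the host's own knowledge.

from dataclasses import dataclass


@dataclass
class SimulatedAgent:
    beliefs: dict  # the host's *approximation* of the other person's beliefs

    def answer(self, question: str) -> bool:
        # The simulation can only consult its own belief dict; the host's
        # knowledge simply isn't reachable from in here.
        return self.beliefs.get(question, False)


host_knowledge = {"the key is under the mat": True}           # what I know
my_model_of_you = SimulatedAgent(beliefs={"the key is under the mat": False})

# Predicting what *you* would say, not what I know to be true:
print(my_model_of_you.answer("the key is under the mat"))     # False
```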
Maybe the way to transcend it is to develop a more sophisticated kind of self-model.
But that’s complicated by the fact that when you’re doing politics irl you need to be able to distinguish other people’s models of you from your own model of you, so you’re going to end up with an abundance of shitty models of yourself. I think people fall into the mistake of thinking that the you that your friend sees when you’re talking is the actual you. They really want to believe it.

Humans sure are rough.
But after looking over this, reexamining, yeah, what causes people to talk about consciousness in these ways?
I agree. The eliminationist approach cannot explain why people talk so much about consciousness. Well, maybe it can, but the post sure doesn’t try. I think your argument that consciousness is related to self-other modeling points in the right direction, but it doesn’t do the full work, and in that sense falls short in the same way “emergence” does.
Perceiving is going on in the brain, and my guess would be that the process of perceiving can itself be perceived too[1]. As there is already a highly predictive model of physical identity, namely the body, the simplest (albeit wrong) model is for the brain to identify its observations of its perceptions with its body.
Maybe the way to transcend it is to develop a more sophisticated kind of self-model.
I think that’s kind of what meditation can lead to.
If AGI can become conscious (in a way that people would agree counts), and if sufficient self-modeling can lead to no-self via meditation, then presumably AGI would quickly master that too.
I don’t know whether the brain has some intra-brain neuronal feedback or observation-interpretation loops (“I see that I have done this action”). For LLMs, which don’t have feedback loops internally, it could happen via the context window or through observing their own outputs in their training data.
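For the context-window route, a minimal sketch of what I mean (`generate` here is a hypothetical stand-in for whatever model call you have, not a real API; the only point is that the model’s past outputs get written back into what it reads next):

```python
# Minimal sketch of a feedback loop routed through the context window.
# `generate` is a hypothetical placeholder for an actual LLM call; the point is
# only that the model's own prior outputs become observations in its next input.

def generate(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"<model output for: {prompt[-40:]!r}>"


def run_with_feedback(task: str, steps: int = 3) -> str:
    context = task
    for i in range(steps):
        output = generate(context)
        # The model "sees that it has done this action" only because we append
        # its own output to the context it reads on the next step.
        context += f"\n[step {i}] I produced: {output}"
    return context


if __name__ == "__main__":
    print(run_with_feedback("Describe what you notice about your own previous outputs."))
```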
I think that’s kind of what meditation can lead to.
It should, right? But isn’t there a very large overlap between meditators and people who mystify consciousness?
Maybe in the same way as there’s also a very large overlap between people who are pursuing good financial advice and people who end up receiving bad financial advice… Some genres are majority shit, so if I characterise a genre by the average article I’ve encountered from it, of course I will think the genre is shit. But there’s a common adverse-selection process by which the majority of any genre, through no fault of its own, will be shit: shit is easier to produce, and because it doesn’t work, it creates repeat customers, so building for the audience that wants shit is far, far more profitable.
Agree? As long as meditation practice can’t systematically produce and explain the states, it’s just craft and not engineering or science. But I think we will get there.